Nov 1 00:29:56.026501 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025 Nov 1 00:29:56.026515 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:29:56.026522 kernel: BIOS-provided physical RAM map: Nov 1 00:29:56.026526 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Nov 1 00:29:56.026530 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Nov 1 00:29:56.026533 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Nov 1 00:29:56.026538 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Nov 1 00:29:56.026542 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Nov 1 00:29:56.026546 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000061f5ffff] usable Nov 1 00:29:56.026550 kernel: BIOS-e820: [mem 0x0000000061f60000-0x0000000061f60fff] ACPI NVS Nov 1 00:29:56.026554 kernel: BIOS-e820: [mem 0x0000000061f61000-0x0000000061f61fff] reserved Nov 1 00:29:56.026559 kernel: BIOS-e820: [mem 0x0000000061f62000-0x000000006c0c4fff] usable Nov 1 00:29:56.026563 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved Nov 1 00:29:56.026567 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable Nov 1 00:29:56.026573 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS Nov 1 00:29:56.026577 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved Nov 1 00:29:56.026583 kernel: BIOS-e820: [mem 
0x000000006ffff000-0x000000006fffffff] usable Nov 1 00:29:56.026587 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved Nov 1 00:29:56.026592 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 1 00:29:56.026596 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Nov 1 00:29:56.026600 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Nov 1 00:29:56.026605 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 1 00:29:56.026609 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Nov 1 00:29:56.026614 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable Nov 1 00:29:56.026619 kernel: NX (Execute Disable) protection: active Nov 1 00:29:56.026623 kernel: APIC: Static calls initialized Nov 1 00:29:56.026628 kernel: SMBIOS 3.2.1 present. Nov 1 00:29:56.026633 kernel: DMI: Supermicro X11SCH-F/X11SCH-F, BIOS 1.5 11/17/2020 Nov 1 00:29:56.026638 kernel: tsc: Detected 3400.000 MHz processor Nov 1 00:29:56.026642 kernel: tsc: Detected 3399.906 MHz TSC Nov 1 00:29:56.026647 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:29:56.026652 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:29:56.026657 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000 Nov 1 00:29:56.026662 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Nov 1 00:29:56.026666 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:29:56.026671 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000 Nov 1 00:29:56.026676 kernel: Using GB pages for direct mapping Nov 1 00:29:56.026681 kernel: ACPI: Early table checksum verification disabled Nov 1 00:29:56.026686 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Nov 1 00:29:56.026691 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Nov 1 00:29:56.026697 kernel: ACPI: FACP 
0x000000006D680620 000114 (v06 01072009 AMI 00010013) Nov 1 00:29:56.026702 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Nov 1 00:29:56.026707 kernel: ACPI: FACS 0x000000006D762F80 000040 Nov 1 00:29:56.026713 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013) Nov 1 00:29:56.026718 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013) Nov 1 00:29:56.026723 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Nov 1 00:29:56.026728 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Nov 1 00:29:56.026733 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Nov 1 00:29:56.026738 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Nov 1 00:29:56.026743 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Nov 1 00:29:56.026748 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Nov 1 00:29:56.026754 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:29:56.026759 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Nov 1 00:29:56.026764 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Nov 1 00:29:56.026769 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:29:56.026774 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:29:56.026779 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Nov 1 00:29:56.026783 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Nov 1 00:29:56.026789 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:29:56.026794 kernel: 
ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:29:56.026799 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Nov 1 00:29:56.026804 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013) Nov 1 00:29:56.026809 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Nov 1 00:29:56.026814 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Nov 1 00:29:56.026819 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Nov 1 00:29:56.026824 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 \xefa 01072009 AMI 00010013) Nov 1 00:29:56.026829 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Nov 1 00:29:56.026834 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Nov 1 00:29:56.026840 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Nov 1 00:29:56.026845 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Nov 1 00:29:56.026850 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Nov 1 00:29:56.026855 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733] Nov 1 00:29:56.026860 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e] Nov 1 00:29:56.026865 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf] Nov 1 00:29:56.026870 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863] Nov 1 00:29:56.026875 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab] Nov 1 00:29:56.026880 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b] Nov 1 00:29:56.026885 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b] Nov 1 00:29:56.026890 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0] Nov 1 00:29:56.026895 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3] Nov 1 00:29:56.026900 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd] Nov 1 00:29:56.026905 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea] Nov 1 00:29:56.026910 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27] Nov 1 00:29:56.026915 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5] Nov 1 00:29:56.026920 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce] Nov 1 00:29:56.026926 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311] Nov 1 00:29:56.026931 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab] Nov 1 00:29:56.026936 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d] Nov 1 00:29:56.026941 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071] Nov 1 00:29:56.026946 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab] Nov 1 00:29:56.026950 kernel: ACPI: Reserving DBG2 table memory at [mem 
0x6d68d0b0-0x6d68d103] Nov 1 00:29:56.026955 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e] Nov 1 00:29:56.026960 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17] Nov 1 00:29:56.026965 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b] Nov 1 00:29:56.026970 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93] Nov 1 00:29:56.026976 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26] Nov 1 00:29:56.026981 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f] Nov 1 00:29:56.026986 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f] Nov 1 00:29:56.026991 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf] Nov 1 00:29:56.026995 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf] Nov 1 00:29:56.027000 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b] Nov 1 00:29:56.027005 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1] Nov 1 00:29:56.027010 kernel: No NUMA configuration found Nov 1 00:29:56.027015 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff] Nov 1 00:29:56.027021 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff] Nov 1 00:29:56.027026 kernel: Zone ranges: Nov 1 00:29:56.027031 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:29:56.027036 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 00:29:56.027041 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff] Nov 1 00:29:56.027046 kernel: Movable zone start for each node Nov 1 00:29:56.027051 kernel: Early memory node ranges Nov 1 00:29:56.027056 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 1 00:29:56.027061 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 1 00:29:56.027066 kernel: node 0: [mem 0x0000000040400000-0x0000000061f5ffff] Nov 1 00:29:56.027071 kernel: node 0: [mem 
0x0000000061f62000-0x000000006c0c4fff] Nov 1 00:29:56.027076 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff] Nov 1 00:29:56.027081 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff] Nov 1 00:29:56.027086 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff] Nov 1 00:29:56.027099 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff] Nov 1 00:29:56.027104 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:29:56.027130 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 1 00:29:56.027136 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 1 00:29:56.027142 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 1 00:29:56.027164 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Nov 1 00:29:56.027169 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Nov 1 00:29:56.027175 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges Nov 1 00:29:56.027180 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 1 00:29:56.027185 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 1 00:29:56.027190 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 1 00:29:56.027196 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 1 00:29:56.027202 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 1 00:29:56.027207 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 1 00:29:56.027212 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 1 00:29:56.027218 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 1 00:29:56.027223 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 1 00:29:56.027228 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 1 00:29:56.027233 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 1 00:29:56.027239 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 1 00:29:56.027244 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] 
high edge lint[0x1]) Nov 1 00:29:56.027250 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 1 00:29:56.027255 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 1 00:29:56.027260 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 1 00:29:56.027265 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 1 00:29:56.027271 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Nov 1 00:29:56.027276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:29:56.027281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:29:56.027286 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:29:56.027292 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:29:56.027298 kernel: TSC deadline timer available Nov 1 00:29:56.027303 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 1 00:29:56.027309 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices Nov 1 00:29:56.027314 kernel: Booting paravirtualized kernel on bare hardware Nov 1 00:29:56.027319 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:29:56.027325 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 1 00:29:56.027330 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Nov 1 00:29:56.027335 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Nov 1 00:29:56.027341 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 1 00:29:56.027347 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin 
verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:29:56.027353 kernel: random: crng init done Nov 1 00:29:56.027358 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 1 00:29:56.027363 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 1 00:29:56.027368 kernel: Fallback order for Node 0: 0 Nov 1 00:29:56.027374 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323 Nov 1 00:29:56.027379 kernel: Policy zone: Normal Nov 1 00:29:56.027384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:29:56.027390 kernel: software IO TLB: area num 16. Nov 1 00:29:56.027396 kernel: Memory: 32551316K/33281940K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 730364K reserved, 0K cma-reserved) Nov 1 00:29:56.027401 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 1 00:29:56.027407 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 00:29:56.027412 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 00:29:56.027418 kernel: Dynamic Preempt: voluntary Nov 1 00:29:56.027423 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:29:56.027428 kernel: rcu: RCU event tracing is enabled. Nov 1 00:29:56.027434 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 1 00:29:56.027440 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:29:56.027446 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:29:56.027451 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:29:56.027456 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:29:56.027461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 1 00:29:56.027467 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 1 00:29:56.027472 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 1 00:29:56.027477 kernel: Console: colour dummy device 80x25 Nov 1 00:29:56.027482 kernel: printk: console [tty0] enabled Nov 1 00:29:56.027488 kernel: printk: console [ttyS1] enabled Nov 1 00:29:56.027494 kernel: ACPI: Core revision 20230628 Nov 1 00:29:56.027499 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Nov 1 00:29:56.027505 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:29:56.027510 kernel: DMAR: Host address width 39 Nov 1 00:29:56.027515 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Nov 1 00:29:56.027521 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Nov 1 00:29:56.027526 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 1 00:29:56.027531 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 1 00:29:56.027536 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff Nov 1 00:29:56.027543 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff Nov 1 00:29:56.027548 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Nov 1 00:29:56.027553 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 1 00:29:56.027559 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 1 00:29:56.027564 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 1 00:29:56.027569 kernel: x2apic enabled Nov 1 00:29:56.027575 kernel: APIC: Switched APIC routing to: cluster x2apic Nov 1 00:29:56.027580 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 1 00:29:56.027585 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 1 00:29:56.027592 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Nov 1 00:29:56.027597 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 1 00:29:56.027602 kernel: process: using mwait in idle threads Nov 1 00:29:56.027608 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 00:29:56.027613 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 00:29:56.027618 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:29:56.027623 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 1 00:29:56.027629 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 1 00:29:56.027634 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 1 00:29:56.027640 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 1 00:29:56.027646 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 1 00:29:56.027651 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:29:56.027656 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 00:29:56.027661 kernel: TAA: Mitigation: TSX disabled Nov 1 00:29:56.027667 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 1 00:29:56.027672 kernel: SRBDS: Mitigation: Microcode Nov 1 00:29:56.027677 kernel: GDS: Mitigation: Microcode Nov 1 00:29:56.027683 kernel: active return thunk: its_return_thunk Nov 1 00:29:56.027689 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 00:29:56.027694 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace Nov 1 00:29:56.027699 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:29:56.027704 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:29:56.027710 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:29:56.027715 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 1 00:29:56.027720 kernel: x86/fpu: Supporting 
XSAVE feature 0x010: 'MPX CSR' Nov 1 00:29:56.027725 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:29:56.027731 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 1 00:29:56.027737 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 1 00:29:56.027742 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Nov 1 00:29:56.027748 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:29:56.027753 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:29:56.027758 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 00:29:56.027764 kernel: landlock: Up and running. Nov 1 00:29:56.027769 kernel: SELinux: Initializing. Nov 1 00:29:56.027774 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:29:56.027780 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:29:56.027786 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 1 00:29:56.027791 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 00:29:56.027797 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 00:29:56.027802 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 00:29:56.027807 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 1 00:29:56.027813 kernel: ... version: 4 Nov 1 00:29:56.027818 kernel: ... bit width: 48 Nov 1 00:29:56.027823 kernel: ... generic registers: 4 Nov 1 00:29:56.027829 kernel: ... value mask: 0000ffffffffffff Nov 1 00:29:56.027835 kernel: ... max period: 00007fffffffffff Nov 1 00:29:56.027840 kernel: ... fixed-purpose events: 3 Nov 1 00:29:56.027845 kernel: ... 
event mask: 000000070000000f Nov 1 00:29:56.027850 kernel: signal: max sigframe size: 2032 Nov 1 00:29:56.027856 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 1 00:29:56.027861 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:29:56.027866 kernel: rcu: Max phase no-delay instances is 400. Nov 1 00:29:56.027871 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 1 00:29:56.027878 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:29:56.027883 kernel: smpboot: x86: Booting SMP configuration: Nov 1 00:29:56.027888 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Nov 1 00:29:56.027894 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 1 00:29:56.027899 kernel: smp: Brought up 1 node, 16 CPUs Nov 1 00:29:56.027904 kernel: smpboot: Max logical packages: 1 Nov 1 00:29:56.027910 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 1 00:29:56.027915 kernel: devtmpfs: initialized Nov 1 00:29:56.027920 kernel: x86/mm: Memory block size: 128MB Nov 1 00:29:56.027926 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x61f60000-0x61f60fff] (4096 bytes) Nov 1 00:29:56.027932 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes) Nov 1 00:29:56.027937 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:29:56.027943 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 1 00:29:56.027948 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:29:56.027953 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:29:56.027958 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:29:56.027964 kernel: audit: type=2000 audit(1761956990.122:1): state=initialized audit_enabled=0 res=1 Nov 
1 00:29:56.027969 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:29:56.027975 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:29:56.027980 kernel: cpuidle: using governor menu Nov 1 00:29:56.027986 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:29:56.027991 kernel: dca service started, version 1.12.1 Nov 1 00:29:56.027996 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 1 00:29:56.028002 kernel: PCI: Using configuration type 1 for base access Nov 1 00:29:56.028007 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 1 00:29:56.028012 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 00:29:56.028017 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:29:56.028023 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 00:29:56.028029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:29:56.028034 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 00:29:56.028039 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:29:56.028045 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:29:56.028050 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:29:56.028055 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 1 00:29:56.028060 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:29:56.028066 kernel: ACPI: SSDT 0xFFFF968D01D18400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 1 00:29:56.028072 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:29:56.028077 kernel: ACPI: SSDT 0xFFFF968D01D0A800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 1 00:29:56.028082 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:29:56.028088 kernel: ACPI: SSDT 0xFFFF968D00249B00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 1 00:29:56.028096 kernel: ACPI: Dynamic OEM Table Load: Nov 
1 00:29:56.028101 kernel: ACPI: SSDT 0xFFFF968D0243E000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 1 00:29:56.028125 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:29:56.028130 kernel: ACPI: SSDT 0xFFFF968D0012D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 1 00:29:56.028135 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:29:56.028141 kernel: ACPI: SSDT 0xFFFF968D01D19000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 1 00:29:56.028161 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 1 00:29:56.028166 kernel: ACPI: Interpreter enabled Nov 1 00:29:56.028171 kernel: ACPI: PM: (supports S0 S5) Nov 1 00:29:56.028177 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:29:56.028182 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 1 00:29:56.028187 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 1 00:29:56.028192 kernel: HEST: Table parsing has been initialized. Nov 1 00:29:56.028198 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 1 00:29:56.028203 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:29:56.028209 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 00:29:56.028214 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 1 00:29:56.028220 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 1 00:29:56.028225 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 1 00:29:56.028231 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 1 00:29:56.028236 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 1 00:29:56.028241 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 1 00:29:56.028246 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 1 00:29:56.028251 kernel: ACPI: \_TZ_.FN00: New power resource Nov 1 00:29:56.028258 kernel: ACPI: \_TZ_.FN01: New power resource Nov 1 00:29:56.028263 kernel: ACPI: \_TZ_.FN02: New power resource Nov 1 00:29:56.028268 kernel: ACPI: \_TZ_.FN03: New power resource Nov 1 00:29:56.028273 kernel: ACPI: \_TZ_.FN04: New power resource Nov 1 00:29:56.028279 kernel: ACPI: \PIN_: New power resource Nov 1 00:29:56.028284 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 1 00:29:56.028367 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:29:56.028450 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 1 00:29:56.028503 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 1 00:29:56.028511 kernel: PCI host bridge to bus 0000:00 Nov 1 00:29:56.028561 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:29:56.028607 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:29:56.028651 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:29:56.028694 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window] Nov 1 
00:29:56.028739 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Nov 1 00:29:56.028782 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Nov 1 00:29:56.028842 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Nov 1 00:29:56.028899 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Nov 1 00:29:56.028950 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Nov 1 00:29:56.029005 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Nov 1 00:29:56.029057 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Nov 1 00:29:56.029117 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Nov 1 00:29:56.029168 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit]
Nov 1 00:29:56.029218 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Nov 1 00:29:56.029266 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Nov 1 00:29:56.029320 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Nov 1 00:29:56.029369 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit]
Nov 1 00:29:56.029425 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Nov 1 00:29:56.029475 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit]
Nov 1 00:29:56.029531 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Nov 1 00:29:56.029581 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit]
Nov 1 00:29:56.029630 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Nov 1 00:29:56.029684 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Nov 1 00:29:56.029741 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit]
Nov 1 00:29:56.029794 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit]
Nov 1 00:29:56.029847 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Nov 1 00:29:56.029898 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 00:29:56.029951 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Nov 1 00:29:56.030001 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 00:29:56.030056 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Nov 1 00:29:56.030111 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit]
Nov 1 00:29:56.030161 kernel: pci 0000:00:16.0: PME# supported from D3hot
Nov 1 00:29:56.030216 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Nov 1 00:29:56.030266 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit]
Nov 1 00:29:56.030315 kernel: pci 0000:00:16.1: PME# supported from D3hot
Nov 1 00:29:56.030372 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Nov 1 00:29:56.030425 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit]
Nov 1 00:29:56.030475 kernel: pci 0000:00:16.4: PME# supported from D3hot
Nov 1 00:29:56.030527 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Nov 1 00:29:56.030577 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff]
Nov 1 00:29:56.030629 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff]
Nov 1 00:29:56.030678 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Nov 1 00:29:56.030726 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Nov 1 00:29:56.030778 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Nov 1 00:29:56.030826 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff]
Nov 1 00:29:56.030875 kernel: pci 0000:00:17.0: PME# supported from D3hot
Nov 1 00:29:56.030929 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Nov 1 00:29:56.030983 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Nov 1 00:29:56.031037 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Nov 1 00:29:56.031088 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Nov 1 00:29:56.031147 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Nov 1 00:29:56.031198 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Nov 1 00:29:56.031253 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Nov 1 00:29:56.031306 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Nov 1 00:29:56.031361 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Nov 1 00:29:56.031411 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Nov 1 00:29:56.031469 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Nov 1 00:29:56.031519 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 00:29:56.031573 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Nov 1 00:29:56.031628 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Nov 1 00:29:56.031678 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit]
Nov 1 00:29:56.031727 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Nov 1 00:29:56.031780 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Nov 1 00:29:56.031829 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Nov 1 00:29:56.031880 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 1 00:29:56.031935 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Nov 1 00:29:56.031990 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Nov 1 00:29:56.032042 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref]
Nov 1 00:29:56.032095 kernel: pci 0000:02:00.0: PME# supported from D3cold
Nov 1 00:29:56.032148 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 1 00:29:56.032202 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 1 00:29:56.032258 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Nov 1 00:29:56.032310 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Nov 1 00:29:56.032364 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref]
Nov 1 00:29:56.032414 kernel: pci 0000:02:00.1: PME# supported from D3cold
Nov 1 00:29:56.032465 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 1 00:29:56.032515 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 1 00:29:56.032567 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Nov 1 00:29:56.032617 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff]
Nov 1 00:29:56.032667 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Nov 1 00:29:56.032719 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Nov 1 00:29:56.032777 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Nov 1 00:29:56.032829 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Nov 1 00:29:56.032880 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff]
Nov 1 00:29:56.032933 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Nov 1 00:29:56.032984 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff]
Nov 1 00:29:56.033038 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Nov 1 00:29:56.033095 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Nov 1 00:29:56.033147 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Nov 1 00:29:56.033198 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff]
Nov 1 00:29:56.033253 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect
Nov 1 00:29:56.033305 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Nov 1 00:29:56.033356 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff]
Nov 1 00:29:56.033407 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Nov 1 00:29:56.033458 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff]
Nov 1 00:29:56.033511 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Nov 1 00:29:56.033563 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Nov 1 00:29:56.033612 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Nov 1 00:29:56.033663 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff]
Nov 1 00:29:56.033712 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Nov 1 00:29:56.033769 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400
Nov 1 00:29:56.033821 kernel: pci 0000:07:00.0: enabling Extended Tags
Nov 1 00:29:56.033875 kernel: pci 0000:07:00.0: supports D1 D2
Nov 1 00:29:56.033925 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 1 00:29:56.033976 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Nov 1 00:29:56.034026 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Nov 1 00:29:56.034076 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff]
Nov 1 00:29:56.034135 kernel: pci_bus 0000:08: extended config space not accessible
Nov 1 00:29:56.034193 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000
Nov 1 00:29:56.034250 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff]
Nov 1 00:29:56.034304 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff]
Nov 1 00:29:56.034358 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f]
Nov 1 00:29:56.034412 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:29:56.034464 kernel: pci 0000:08:00.0: supports D1 D2
Nov 1 00:29:56.034517 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 1 00:29:56.034568 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Nov 1 00:29:56.034620 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Nov 1 00:29:56.034673 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff]
Nov 1 00:29:56.034682 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Nov 1 00:29:56.034688 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Nov 1 00:29:56.034694 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Nov 1 00:29:56.034700 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Nov 1 00:29:56.034705 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Nov 1 00:29:56.034711 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Nov 1 00:29:56.034717 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Nov 1 00:29:56.034724 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Nov 1 00:29:56.034730 kernel: iommu: Default domain type: Translated
Nov 1 00:29:56.034736 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:29:56.034742 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:29:56.034747 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:29:56.034753 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Nov 1 00:29:56.034759 kernel: e820: reserve RAM buffer [mem 0x61f60000-0x63ffffff]
Nov 1 00:29:56.034764 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff]
Nov 1 00:29:56.034770 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff]
Nov 1 00:29:56.034776 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff]
Nov 1 00:29:56.034829 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device
Nov 1 00:29:56.034883 kernel: pci 0000:08:00.0: vgaarb: bridge control possible
Nov 1 00:29:56.034935 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:29:56.034943 kernel: vgaarb: loaded
Nov 1 00:29:56.034950 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 1 00:29:56.034956 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter
Nov 1 00:29:56.034961 kernel: clocksource: Switched to clocksource tsc-early
Nov 1 00:29:56.034967 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:29:56.034975 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:29:56.034980 kernel: pnp: PnP ACPI init
Nov 1 00:29:56.035034 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Nov 1 00:29:56.035083 kernel: pnp 00:02: [dma 0 disabled]
Nov 1 00:29:56.035158 kernel: pnp 00:03: [dma 0 disabled]
Nov 1 00:29:56.035206 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Nov 1 00:29:56.035251 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Nov 1 00:29:56.035301 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved
Nov 1 00:29:56.035347 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved
Nov 1 00:29:56.035391 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved
Nov 1 00:29:56.035436 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved
Nov 1 00:29:56.035481 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved
Nov 1 00:29:56.035525 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved
Nov 1 00:29:56.035572 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved
Nov 1 00:29:56.035619 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved
Nov 1 00:29:56.035667 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved
Nov 1 00:29:56.035712 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved
Nov 1 00:29:56.035756 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Nov 1 00:29:56.035799 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved
Nov 1 00:29:56.035843 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved
Nov 1 00:29:56.035889 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved
Nov 1 00:29:56.035934 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved
Nov 1 00:29:56.035983 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved
Nov 1 00:29:56.035992 kernel: pnp: PnP ACPI: found 9 devices
Nov 1 00:29:56.035998 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:29:56.036004 kernel: NET: Registered PF_INET protocol family
Nov 1 00:29:56.036011 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:29:56.036017 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Nov 1 00:29:56.036024 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:29:56.036029 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:29:56.036035 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 1 00:29:56.036041 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Nov 1 00:29:56.036047 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 00:29:56.036052 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 00:29:56.036058 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:29:56.036064 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:29:56.036158 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit]
Nov 1 00:29:56.036211 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit]
Nov 1 00:29:56.036261 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit]
Nov 1 00:29:56.036310 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 1 00:29:56.036361 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Nov 1 00:29:56.036415 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Nov 1 00:29:56.036465 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Nov 1 00:29:56.036517 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Nov 1 00:29:56.036566 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Nov 1 00:29:56.036616 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff]
Nov 1 00:29:56.036665 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Nov 1 00:29:56.036714 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Nov 1 00:29:56.036762 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Nov 1 00:29:56.036814 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Nov 1 00:29:56.036862 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff]
Nov 1 00:29:56.036910 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Nov 1 00:29:56.036960 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Nov 1 00:29:56.037007 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff]
Nov 1 00:29:56.037057 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Nov 1 00:29:56.037133 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Nov 1 00:29:56.037185 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Nov 1 00:29:56.037236 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff]
Nov 1 00:29:56.037288 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Nov 1 00:29:56.037337 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Nov 1 00:29:56.037387 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff]
Nov 1 00:29:56.037433 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Nov 1 00:29:56.037477 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:29:56.037521 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:29:56.037565 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:29:56.037608 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window]
Nov 1 00:29:56.037652 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Nov 1 00:29:56.037704 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff]
Nov 1 00:29:56.037751 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Nov 1 00:29:56.037801 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff]
Nov 1 00:29:56.037848 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff]
Nov 1 00:29:56.037898 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Nov 1 00:29:56.037943 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff]
Nov 1 00:29:56.037996 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Nov 1 00:29:56.038041 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff]
Nov 1 00:29:56.038089 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff]
Nov 1 00:29:56.038157 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff]
Nov 1 00:29:56.038165 kernel: PCI: CLS 64 bytes, default 64
Nov 1 00:29:56.038172 kernel: DMAR: No ATSR found
Nov 1 00:29:56.038177 kernel: DMAR: No SATC found
Nov 1 00:29:56.038185 kernel: DMAR: IOMMU feature fl1gp_support inconsistent
Nov 1 00:29:56.038191 kernel: DMAR: IOMMU feature pgsel_inv inconsistent
Nov 1 00:29:56.038197 kernel: DMAR: IOMMU feature nwfs inconsistent
Nov 1 00:29:56.038202 kernel: DMAR: IOMMU feature pasid inconsistent
Nov 1 00:29:56.038208 kernel: DMAR: IOMMU feature eafs inconsistent
Nov 1 00:29:56.038214 kernel: DMAR: IOMMU feature prs inconsistent
Nov 1 00:29:56.038219 kernel: DMAR: IOMMU feature nest inconsistent
Nov 1 00:29:56.038225 kernel: DMAR: IOMMU feature mts inconsistent
Nov 1 00:29:56.038231 kernel: DMAR: IOMMU feature sc_support inconsistent
Nov 1 00:29:56.038237 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent
Nov 1 00:29:56.038243 kernel: DMAR: dmar0: Using Queued invalidation
Nov 1 00:29:56.038249 kernel: DMAR: dmar1: Using Queued invalidation
Nov 1 00:29:56.038297 kernel: pci 0000:00:02.0: Adding to iommu group 0
Nov 1 00:29:56.038347 kernel: pci 0000:00:00.0: Adding to iommu group 1
Nov 1 00:29:56.038397 kernel: pci 0000:00:01.0: Adding to iommu group 2
Nov 1 00:29:56.038445 kernel: pci 0000:00:01.1: Adding to iommu group 2
Nov 1 00:29:56.038495 kernel: pci 0000:00:08.0: Adding to iommu group 3
Nov 1 00:29:56.038544 kernel: pci 0000:00:12.0: Adding to iommu group 4
Nov 1 00:29:56.038595 kernel: pci 0000:00:14.0: Adding to iommu group 5
Nov 1 00:29:56.038644 kernel: pci 0000:00:14.2: Adding to iommu group 5
Nov 1 00:29:56.038692 kernel: pci 0000:00:15.0: Adding to iommu group 6
Nov 1 00:29:56.038741 kernel: pci 0000:00:15.1: Adding to iommu group 6
Nov 1 00:29:56.038789 kernel: pci 0000:00:16.0: Adding to iommu group 7
Nov 1 00:29:56.038838 kernel: pci 0000:00:16.1: Adding to iommu group 7
Nov 1 00:29:56.038886 kernel: pci 0000:00:16.4: Adding to iommu group 7
Nov 1 00:29:56.038935 kernel: pci 0000:00:17.0: Adding to iommu group 8
Nov 1 00:29:56.038986 kernel: pci 0000:00:1b.0: Adding to iommu group 9
Nov 1 00:29:56.039036 kernel: pci 0000:00:1b.4: Adding to iommu group 10
Nov 1 00:29:56.039085 kernel: pci 0000:00:1b.5: Adding to iommu group 11
Nov 1 00:29:56.039181 kernel: pci 0000:00:1c.0: Adding to iommu group 12
Nov 1 00:29:56.039230 kernel: pci 0000:00:1c.1: Adding to iommu group 13
Nov 1 00:29:56.039278 kernel: pci 0000:00:1e.0: Adding to iommu group 14
Nov 1 00:29:56.039326 kernel: pci 0000:00:1f.0: Adding to iommu group 15
Nov 1 00:29:56.039375 kernel: pci 0000:00:1f.4: Adding to iommu group 15
Nov 1 00:29:56.039426 kernel: pci 0000:00:1f.5: Adding to iommu group 15
Nov 1 00:29:56.039477 kernel: pci 0000:02:00.0: Adding to iommu group 2
Nov 1 00:29:56.039528 kernel: pci 0000:02:00.1: Adding to iommu group 2
Nov 1 00:29:56.039578 kernel: pci 0000:04:00.0: Adding to iommu group 16
Nov 1 00:29:56.039629 kernel: pci 0000:05:00.0: Adding to iommu group 17
Nov 1 00:29:56.039679 kernel: pci 0000:07:00.0: Adding to iommu group 18
Nov 1 00:29:56.039732 kernel: pci 0000:08:00.0: Adding to iommu group 18
Nov 1 00:29:56.039740 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Nov 1 00:29:56.039748 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 00:29:56.039754 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB)
Nov 1 00:29:56.039759 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer
Nov 1 00:29:56.039765 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Nov 1 00:29:56.039771 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Nov 1 00:29:56.039777 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Nov 1 00:29:56.039782 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
Nov 1 00:29:56.039836 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Nov 1 00:29:56.039845 kernel: Initialise system trusted keyrings
Nov 1 00:29:56.039852 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Nov 1 00:29:56.039858 kernel: Key type asymmetric registered
Nov 1 00:29:56.039863 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:29:56.039869 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:29:56.039874 kernel: io scheduler mq-deadline registered
Nov 1 00:29:56.039880 kernel: io scheduler kyber registered
Nov 1 00:29:56.039886 kernel: io scheduler bfq registered
Nov 1 00:29:56.039935 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122
Nov 1 00:29:56.039987 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123
Nov 1 00:29:56.040036 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124
Nov 1 00:29:56.040086 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125
Nov 1 00:29:56.040181 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126
Nov 1 00:29:56.040230 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127
Nov 1 00:29:56.040279 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128
Nov 1 00:29:56.040332 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Nov 1 00:29:56.040343 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Nov 1 00:29:56.040349 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Nov 1 00:29:56.040355 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:29:56.040360 kernel: pstore: Registered erst as persistent store backend
Nov 1 00:29:56.040366 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:29:56.040372 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:29:56.040377 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:29:56.040383 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 00:29:56.040432 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Nov 1 00:29:56.040442 kernel: i8042: PNP: No PS/2 controller found.
Nov 1 00:29:56.040487 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Nov 1 00:29:56.040532 kernel: rtc_cmos rtc_cmos: registered as rtc0
Nov 1 00:29:56.040578 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T00:29:54 UTC (1761956994)
Nov 1 00:29:56.040622 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Nov 1 00:29:56.040630 kernel: intel_pstate: Intel P-state driver initializing
Nov 1 00:29:56.040636 kernel: intel_pstate: Disabling energy efficiency optimization
Nov 1 00:29:56.040642 kernel: intel_pstate: HWP enabled
Nov 1 00:29:56.040649 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Nov 1 00:29:56.040655 kernel: vesafb: scrolling: redraw
Nov 1 00:29:56.040661 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Nov 1 00:29:56.040666 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x00000000992d366d, using 768k, total 768k
Nov 1 00:29:56.040672 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 00:29:56.040678 kernel: fb0: VESA VGA frame buffer device
Nov 1 00:29:56.040684 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:29:56.040689 kernel: Segment Routing with IPv6
Nov 1 00:29:56.040695 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:29:56.040701 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:29:56.040707 kernel: Key type dns_resolver registered
Nov 1 00:29:56.040713 kernel: microcode: Current revision: 0x000000fc
Nov 1 00:29:56.040718 kernel: microcode: Updated early from: 0x000000de
Nov 1 00:29:56.040724 kernel: microcode: Microcode Update Driver: v2.2.
Nov 1 00:29:56.040729 kernel: IPI shorthand broadcast: enabled
Nov 1 00:29:56.040735 kernel: sched_clock: Marking stable (1737000659, 1373976193)->(4575120178, -1464143326)
Nov 1 00:29:56.040741 kernel: registered taskstats version 1
Nov 1 00:29:56.040746 kernel: Loading compiled-in X.509 certificates
Nov 1 00:29:56.040753 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:29:56.040759 kernel: Key type .fscrypt registered
Nov 1 00:29:56.040764 kernel: Key type fscrypt-provisioning registered
Nov 1 00:29:56.040770 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:29:56.040776 kernel: ima: No architecture policies found
Nov 1 00:29:56.040781 kernel: clk: Disabling unused clocks
Nov 1 00:29:56.040787 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:29:56.040793 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:29:56.040798 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:29:56.040805 kernel: Run /init as init process
Nov 1 00:29:56.040811 kernel: with arguments:
Nov 1 00:29:56.040816 kernel: /init
Nov 1 00:29:56.040822 kernel: with environment:
Nov 1 00:29:56.040828 kernel: HOME=/
Nov 1 00:29:56.040833 kernel: TERM=linux
Nov 1 00:29:56.040840 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:29:56.040848 systemd[1]: Detected architecture x86-64.
Nov 1 00:29:56.040854 systemd[1]: Running in initrd.
Nov 1 00:29:56.040860 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:29:56.040866 systemd[1]: Hostname set to .
Nov 1 00:29:56.040871 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:29:56.040877 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:29:56.040883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:29:56.040889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:29:56.040896 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:29:56.040902 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:29:56.040908 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:29:56.040914 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:29:56.040921 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:29:56.040927 kernel: tsc: Refined TSC clocksource calibration: 3408.094 MHz
Nov 1 00:29:56.040933 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x31202cc47c0, max_idle_ns: 440795231130 ns
Nov 1 00:29:56.040939 kernel: clocksource: Switched to clocksource tsc
Nov 1 00:29:56.040945 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:29:56.040951 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:29:56.040957 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:29:56.040963 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:29:56.040969 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:29:56.040975 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:29:56.040980 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:29:56.040986 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:29:56.040993 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:29:56.040999 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:29:56.041005 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:29:56.041011 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:29:56.041016 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:29:56.041023 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:29:56.041028 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:29:56.041034 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:29:56.041041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:29:56.041047 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:29:56.041053 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:29:56.041059 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:29:56.041075 systemd-journald[265]: Collecting audit messages is disabled.
Nov 1 00:29:56.041093 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:29:56.041100 systemd-journald[265]: Journal started
Nov 1 00:29:56.041133 systemd-journald[265]: Runtime Journal (/run/log/journal/ad837761db5d4bafb7fc71bea222888d) is 8.0M, max 636.6M, 628.6M free.
Nov 1 00:29:56.054082 systemd-modules-load[266]: Inserted module 'overlay'
Nov 1 00:29:56.076118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:29:56.104888 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:29:56.167136 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:29:56.167168 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:29:56.167178 kernel: Bridge firewalling registered
Nov 1 00:29:56.163278 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:29:56.167068 systemd-modules-load[266]: Inserted module 'br_netfilter'
Nov 1 00:29:56.188415 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:29:56.213475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:29:56.231479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:29:56.268351 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:29:56.268780 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:29:56.269130 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:29:56.269506 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:29:56.273843 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:29:56.274549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:29:56.274686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:29:56.275384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:29:56.276357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:29:56.280042 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:29:56.284356 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:29:56.295808 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:29:56.297892 systemd-resolved[299]: Positive Trust Anchors:
Nov 1 00:29:56.297905 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:29:56.297945 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:29:56.299965 systemd-resolved[299]: Defaulting to hostname 'linux'.
Nov 1 00:29:56.318344 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:29:56.339295 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:29:56.449197 dracut-cmdline[310]: dracut-dracut-053
Nov 1 00:29:56.449197 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:29:56.516097 kernel: SCSI subsystem initialized
Nov 1 00:29:56.539146 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:29:56.562141 kernel: iscsi: registered transport (tcp)
Nov 1 00:29:56.594348 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:29:56.594365 kernel: QLogic iSCSI HBA Driver
Nov 1 00:29:56.627563 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:29:56.659332 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:29:56.721816 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:29:56.721835 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:29:56.741458 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:29:56.801152 kernel: raid6: avx2x4 gen() 53021 MB/s
Nov 1 00:29:56.833154 kernel: raid6: avx2x2 gen() 53006 MB/s
Nov 1 00:29:56.869469 kernel: raid6: avx2x1 gen() 45279 MB/s
Nov 1 00:29:56.869488 kernel: raid6: using algorithm avx2x4 gen() 53021 MB/s
Nov 1 00:29:56.916535 kernel: raid6: .... xor() 19905 MB/s, rmw enabled
Nov 1 00:29:56.916555 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:29:56.958146 kernel: xor: automatically using best checksumming function avx
Nov 1 00:29:57.076100 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:29:57.081498 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:29:57.119393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:29:57.126792 systemd-udevd[496]: Using default interface naming scheme 'v255'.
Nov 1 00:29:57.130219 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:29:57.174361 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:29:57.218117 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Nov 1 00:29:57.244712 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:29:57.266387 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:29:57.329227 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:29:57.354149 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:29:57.354176 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:29:57.363278 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:29:57.390100 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:29:57.400102 kernel: PTP clock support registered
Nov 1 00:29:57.400148 kernel: libata version 3.00 loaded.
Nov 1 00:29:57.401400 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:29:57.498152 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:29:57.498168 kernel: ACPI: bus type USB registered
Nov 1 00:29:57.498178 kernel: usbcore: registered new interface driver usbfs
Nov 1 00:29:57.498187 kernel: usbcore: registered new interface driver hub
Nov 1 00:29:57.498196 kernel: usbcore: registered new device driver usb
Nov 1 00:29:57.498205 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:29:57.401444 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:29:57.502095 kernel: ahci 0000:00:17.0: version 3.0
Nov 1 00:29:57.510163 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:29:57.556657 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode
Nov 1 00:29:57.556763 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Nov 1 00:29:57.556832 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Nov 1 00:29:57.530152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:29:57.608140 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Nov 1 00:29:57.608154 kernel: scsi host0: ahci
Nov 1 00:29:57.608245 kernel: scsi host1: ahci
Nov 1 00:29:57.608265 kernel: igb 0000:04:00.0: added PHC on eth0
Nov 1 00:29:57.530219 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:29:58.060620 kernel: scsi host2: ahci
Nov 1 00:29:58.060759 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Nov 1 00:29:58.060841 kernel: scsi host3: ahci
Nov 1 00:29:58.060911 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:dc
Nov 1 00:29:58.060979 kernel: scsi host4: ahci
Nov 1 00:29:58.061042 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000
Nov 1 00:29:58.061112 kernel: scsi host5: ahci
Nov 1 00:29:58.061174 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Nov 1 00:29:58.061240 kernel: scsi host6: ahci
Nov 1 00:29:58.061301 kernel: igb 0000:05:00.0: added PHC on eth1
Nov 1 00:29:58.061370 kernel: scsi host7: ahci
Nov 1 00:29:58.061436 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection
Nov 1 00:29:58.061499 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129
Nov 1 00:29:58.061508 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:dd
Nov 1 00:29:58.061570 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129
Nov 1 00:29:58.061579 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000
Nov 1 00:29:58.061641 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129
Nov 1 00:29:58.061650 kernel: igb 0000:05:00.0: Using MSI-X interrupts.
4 rx queue(s), 4 tx queue(s) Nov 1 00:29:58.061713 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129 Nov 1 00:29:58.061721 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129 Nov 1 00:29:58.061728 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 129 Nov 1 00:29:58.061736 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129 Nov 1 00:29:58.061743 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129 Nov 1 00:29:58.061750 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 00:29:58.061815 kernel: mlx5_core 0000:02:00.0: firmware version: 14.29.2002 Nov 1 00:29:58.061881 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 1 00:29:58.061944 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 00:29:58.062007 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 1 00:29:58.062069 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 00:29:57.594645 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:29:58.148498 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 1 00:29:58.148580 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 1 00:29:58.148647 kernel: hub 1-0:1.0: USB hub found Nov 1 00:29:58.148718 kernel: hub 1-0:1.0: 16 ports detected Nov 1 00:29:58.148779 kernel: hub 2-0:1.0: USB hub found Nov 1 00:29:58.148843 kernel: hub 2-0:1.0: 10 ports detected Nov 1 00:29:58.063250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:29:58.171351 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:29:58.181789 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 1 00:29:58.181816 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:29:58.181842 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:29:58.192245 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:29:58.434995 kernel: ata8: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.435012 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 00:29:58.435119 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 00:29:58.435133 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Nov 1 00:29:58.435214 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.435224 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 00:29:58.435233 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 00:29:58.435243 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.435252 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.435261 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.435270 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 1 00:29:58.435289 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 00:29:58.435300 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.429827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:29:58.465142 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 00:29:58.465160 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 00:29:58.472519 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 1 00:29:58.670009 kernel: ata1.00: Features: NCQ-prio Nov 1 00:29:58.670023 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 00:29:58.670124 kernel: ata2.00: Features: NCQ-prio Nov 1 00:29:58.670132 kernel: mlx5_core 0000:02:00.1: firmware version: 14.29.2002 Nov 1 00:29:58.670203 kernel: ata1.00: configured for UDMA/133 Nov 1 00:29:58.670211 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 00:29:58.670276 kernel: ata2.00: configured for UDMA/133 Nov 1 00:29:58.670284 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 00:29:58.670356 kernel: hub 1-14:1.0: USB hub found Nov 1 00:29:58.670432 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 00:29:58.670503 kernel: hub 1-14:1.0: 4 ports detected Nov 1 00:29:58.670583 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Nov 1 00:29:58.692677 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:29:58.692696 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 00:29:58.692704 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 00:29:58.722061 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 00:29:58.722186 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 1 00:29:58.722266 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 1 00:29:58.722344 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 1 00:29:58.722420 kernel: sd 1:0:0:0: [sda] Write Protect is off Nov 1 00:29:58.723145 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Nov 1 00:29:58.732057 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 1 00:29:58.732147 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:29:58.732222 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 1 00:29:58.742074 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Nov 1 
00:29:58.742157 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:29:58.751296 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:29:59.135031 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:29:59.135047 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 00:29:59.135147 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 1 00:29:59.135218 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:29:59.135227 kernel: GPT:9289727 != 937703087 Nov 1 00:29:59.135234 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:29:59.135241 kernel: GPT:9289727 != 937703087 Nov 1 00:29:59.135248 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:29:59.135255 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:29:59.135262 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 1 00:29:59.135327 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Nov 1 00:29:59.135396 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 00:29:59.135404 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 1 00:29:59.135517 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 1 00:29:59.135583 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 00:29:59.135648 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (564) Nov 1 00:29:59.135657 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sdb3 scanned by (udev-worker) (541) Nov 1 00:29:59.135666 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth2 Nov 1 00:29:59.190098 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:29:59.209140 kernel: usbcore: registered new interface driver usbhid Nov 1 00:29:59.209161 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth0 
Nov 1 00:29:59.209249 kernel: usbhid: USB HID core driver Nov 1 00:29:59.216987 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Nov 1 00:29:59.240096 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 1 00:29:59.243531 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 1 00:29:59.297456 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:29:59.321260 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 1 00:29:59.396906 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 1 00:29:59.397007 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 1 00:29:59.397016 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 1 00:29:59.348332 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 00:29:59.432973 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 00:29:59.458178 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:29:59.492074 disk-uuid[737]: Primary Header is updated. Nov 1 00:29:59.492074 disk-uuid[737]: Secondary Entries is updated. Nov 1 00:29:59.492074 disk-uuid[737]: Secondary Header is updated. 
Nov 1 00:29:59.561183 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:29:59.561194 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:29:59.561201 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:29:59.561212 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:29:59.587770 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:29:59.608096 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:30:00.587815 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:30:00.608020 disk-uuid[738]: The operation has completed successfully. Nov 1 00:30:00.617228 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:30:00.648417 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:30:00.648483 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:30:00.685387 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 00:30:00.725348 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:30:00.725363 sh[755]: Success Nov 1 00:30:00.768598 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:30:00.790448 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:30:00.799475 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:30:00.855855 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:30:00.855898 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:30:00.877937 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:30:00.897714 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:30:00.916425 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:30:00.957148 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 00:30:00.959630 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Nov 1 00:30:00.968557 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:30:00.974316 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:30:01.090934 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:30:01.090952 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:30:01.090960 kernel: BTRFS info (device sdb6): using free space tree Nov 1 00:30:01.090967 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 00:30:01.090974 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 00:30:01.078299 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:30:01.137330 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:30:01.124622 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:30:01.147516 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:30:01.178362 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:30:01.189039 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:30:01.242907 ignition[936]: Ignition 2.19.0 Nov 1 00:30:01.242914 ignition[936]: Stage: fetch-offline Nov 1 00:30:01.245357 unknown[936]: fetched base config from "system" Nov 1 00:30:01.242938 ignition[936]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:01.245361 unknown[936]: fetched user config from "system" Nov 1 00:30:01.242945 ignition[936]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:30:01.246334 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 1 00:30:01.243019 ignition[936]: parsed url from cmdline: "" Nov 1 00:30:01.248931 systemd-networkd[938]: lo: Link UP Nov 1 00:30:01.243022 ignition[936]: no config URL provided Nov 1 00:30:01.248934 systemd-networkd[938]: lo: Gained carrier Nov 1 00:30:01.243026 ignition[936]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:30:01.251647 systemd-networkd[938]: Enumeration completed Nov 1 00:30:01.243061 ignition[936]: parsing config with SHA512: 79211332c7063c460813a592a7b77bf791132971ea4fc664d1f7873475cbf514cb067fae4caca4fbfd34f774428fdc62ecb2657f92c809dc9c55e5329e11dbeb Nov 1 00:30:01.252575 systemd-networkd[938]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:30:01.245575 ignition[936]: fetch-offline: fetch-offline passed Nov 1 00:30:01.261480 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:30:01.245577 ignition[936]: POST message to Packet Timeline Nov 1 00:30:01.278581 systemd[1]: Reached target network.target - Network. Nov 1 00:30:01.245580 ignition[936]: POST Status error: resource requires networking Nov 1 00:30:01.281689 systemd-networkd[938]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:30:01.245616 ignition[936]: Ignition finished successfully Nov 1 00:30:01.293341 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 00:30:01.322879 ignition[951]: Ignition 2.19.0 Nov 1 00:30:01.301368 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:30:01.322883 ignition[951]: Stage: kargs Nov 1 00:30:01.495325 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Nov 1 00:30:01.310710 systemd-networkd[938]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 00:30:01.322996 ignition[951]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:01.490685 systemd-networkd[938]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:30:01.323003 ignition[951]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:30:01.323548 ignition[951]: kargs: kargs passed Nov 1 00:30:01.323551 ignition[951]: POST message to Packet Timeline Nov 1 00:30:01.323560 ignition[951]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 00:30:01.324002 ignition[951]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40902->[::1]:53: read: connection refused Nov 1 00:30:01.524767 ignition[951]: GET https://metadata.packet.net/metadata: attempt #2 Nov 1 00:30:01.525739 ignition[951]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34548->[::1]:53: read: connection refused Nov 1 00:30:01.735129 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Nov 1 00:30:01.736335 systemd-networkd[938]: eno1: Link UP Nov 1 00:30:01.736530 systemd-networkd[938]: eno2: Link UP Nov 1 00:30:01.736642 systemd-networkd[938]: enp2s0f0np0: Link UP Nov 1 00:30:01.736770 systemd-networkd[938]: enp2s0f0np0: Gained carrier Nov 1 00:30:01.746252 systemd-networkd[938]: enp2s0f1np1: Link UP Nov 1 00:30:01.766240 systemd-networkd[938]: enp2s0f0np0: DHCPv4 address 139.178.94.145/31, gateway 139.178.94.144 acquired from 145.40.83.140 Nov 1 00:30:01.926042 ignition[951]: GET https://metadata.packet.net/metadata: attempt #3 Nov 1 00:30:01.927127 ignition[951]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59876->[::1]:53: read: connection refused Nov 1 00:30:02.544759 systemd-networkd[938]: enp2s0f1np1: Gained carrier Nov 1 00:30:02.727545 ignition[951]: GET https://metadata.packet.net/metadata: attempt #4 Nov 1 
00:30:02.728677 ignition[951]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48192->[::1]:53: read: connection refused Nov 1 00:30:03.120601 systemd-networkd[938]: enp2s0f0np0: Gained IPv6LL Nov 1 00:30:04.208568 systemd-networkd[938]: enp2s0f1np1: Gained IPv6LL Nov 1 00:30:04.330354 ignition[951]: GET https://metadata.packet.net/metadata: attempt #5 Nov 1 00:30:04.331466 ignition[951]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60921->[::1]:53: read: connection refused Nov 1 00:30:07.534998 ignition[951]: GET https://metadata.packet.net/metadata: attempt #6 Nov 1 00:30:08.619421 ignition[951]: GET result: OK Nov 1 00:30:09.817945 ignition[951]: Ignition finished successfully Nov 1 00:30:09.823819 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:30:09.846352 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:30:09.853856 ignition[969]: Ignition 2.19.0 Nov 1 00:30:09.853861 ignition[969]: Stage: disks Nov 1 00:30:09.853972 ignition[969]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:09.853979 ignition[969]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:30:09.854512 ignition[969]: disks: disks passed Nov 1 00:30:09.854515 ignition[969]: POST message to Packet Timeline Nov 1 00:30:09.854524 ignition[969]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 00:30:11.129978 ignition[969]: GET result: OK Nov 1 00:30:11.882644 ignition[969]: Ignition finished successfully Nov 1 00:30:11.886308 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:30:11.902409 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:30:11.920348 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Nov 1 00:30:11.942524 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:30:11.963466 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:30:11.984483 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:30:12.022357 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 00:30:12.058268 systemd-fsck[985]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 00:30:12.068647 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:30:12.076416 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 00:30:12.203094 kernel: EXT4-fs (sdb9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 00:30:12.203302 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:30:12.213608 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:30:12.250268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:30:12.259057 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 00:30:12.400357 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (994) Nov 1 00:30:12.400371 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:30:12.400380 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:30:12.400387 kernel: BTRFS info (device sdb6): using free space tree Nov 1 00:30:12.400394 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 00:30:12.400401 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 00:30:12.392206 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 1 00:30:12.411654 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... 
Nov 1 00:30:12.433336 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:30:12.483340 coreos-metadata[1011]: Nov 01 00:30:12.458 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 00:30:12.433353 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:30:12.434301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:30:12.531185 coreos-metadata[1012]: Nov 01 00:30:12.470 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 00:30:12.464360 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:30:12.502215 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 00:30:12.562218 initrd-setup-root[1026]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:30:12.573164 initrd-setup-root[1033]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:30:12.584123 initrd-setup-root[1040]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:30:12.594206 initrd-setup-root[1047]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:30:12.601071 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:30:12.630336 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 00:30:12.666361 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:30:12.648600 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:30:12.675848 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 00:30:12.700683 ignition[1114]: INFO : Ignition 2.19.0 Nov 1 00:30:12.700683 ignition[1114]: INFO : Stage: mount Nov 1 00:30:12.705501 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 1 00:30:12.730286 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:12.730286 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:30:12.730286 ignition[1114]: INFO : mount: mount passed Nov 1 00:30:12.730286 ignition[1114]: INFO : POST message to Packet Timeline Nov 1 00:30:12.730286 ignition[1114]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 00:30:13.518659 coreos-metadata[1012]: Nov 01 00:30:13.518 INFO Fetch successful Nov 1 00:30:13.554844 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 1 00:30:13.554902 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Nov 1 00:30:13.675751 ignition[1114]: INFO : GET result: OK Nov 1 00:30:13.928997 coreos-metadata[1011]: Nov 01 00:30:13.928 INFO Fetch successful Nov 1 00:30:13.962392 coreos-metadata[1011]: Nov 01 00:30:13.962 INFO wrote hostname ci-4081.3.6-n-d37906c143 to /sysroot/etc/hostname Nov 1 00:30:13.963679 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 00:30:14.425388 ignition[1114]: INFO : Ignition finished successfully Nov 1 00:30:14.428482 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:30:14.455408 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:30:14.467289 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 1 00:30:14.533011 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1141) Nov 1 00:30:14.533034 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:30:14.553520 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:30:14.571857 kernel: BTRFS info (device sdb6): using free space tree Nov 1 00:30:14.611498 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 00:30:14.611515 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 00:30:14.625606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:30:14.669355 ignition[1158]: INFO : Ignition 2.19.0 Nov 1 00:30:14.669355 ignition[1158]: INFO : Stage: files Nov 1 00:30:14.684378 ignition[1158]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:14.684378 ignition[1158]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:30:14.684378 ignition[1158]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:30:14.684378 ignition[1158]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:30:14.684378 ignition[1158]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:30:14.684378 ignition[1158]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:30:14.684378 ignition[1158]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:30:14.684378 ignition[1158]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:30:14.684378 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:30:14.684378 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:30:14.674251 
unknown[1158]: wrote ssh authorized keys file for user: core Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:30:15.065436 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 00:30:15.276007 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:30:15.596789 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:30:15.596789 ignition[1158]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:30:15.626317 ignition[1158]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:30:15.626317 ignition[1158]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:30:15.626317 ignition[1158]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:30:15.626317 ignition[1158]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:30:15.626317 ignition[1158]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:30:15.626317 ignition[1158]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:30:15.626317 ignition[1158]: INFO : files: 
createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:30:15.626317 ignition[1158]: INFO : files: files passed Nov 1 00:30:15.626317 ignition[1158]: INFO : POST message to Packet Timeline Nov 1 00:30:15.626317 ignition[1158]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 00:30:16.660557 ignition[1158]: INFO : GET result: OK Nov 1 00:30:17.389183 ignition[1158]: INFO : Ignition finished successfully Nov 1 00:30:17.392316 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:30:17.428402 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:30:17.428895 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:30:17.457508 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:30:17.457576 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:30:17.492060 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:30:17.510384 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:30:17.543317 initrd-setup-root-after-ignition[1199]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:30:17.543317 initrd-setup-root-after-ignition[1199]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:30:17.558322 initrd-setup-root-after-ignition[1203]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:30:17.548342 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:30:17.621358 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:30:17.621418 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:30:17.640553 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 1 00:30:17.662291 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:30:17.683604 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:30:17.697512 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:30:17.775441 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:30:17.801344 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:30:17.806515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:30:17.831709 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:30:17.853886 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:30:17.872789 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:30:17.873231 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:30:17.901043 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:30:17.922805 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:30:17.941809 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:30:17.959844 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:30:17.980700 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:30:18.001843 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:30:18.021704 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:30:18.042729 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:30:18.063734 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:30:18.083827 systemd[1]: Stopped target swap.target - Swaps. 
Nov 1 00:30:18.102603 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:30:18.103003 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:30:18.128825 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:30:18.148732 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:30:18.169586 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:30:18.170045 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:30:18.191598 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:30:18.191998 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:30:18.223685 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:30:18.224158 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:30:18.243892 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:30:18.261582 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:30:18.262019 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:30:18.282700 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:30:18.301705 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:30:18.319818 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:30:18.320157 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:30:18.339868 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:30:18.340205 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:30:18.362924 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Nov 1 00:30:18.477286 ignition[1223]: INFO : Ignition 2.19.0 Nov 1 00:30:18.477286 ignition[1223]: INFO : Stage: umount Nov 1 00:30:18.477286 ignition[1223]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:18.477286 ignition[1223]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:30:18.477286 ignition[1223]: INFO : umount: umount passed Nov 1 00:30:18.477286 ignition[1223]: INFO : POST message to Packet Timeline Nov 1 00:30:18.477286 ignition[1223]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 00:30:18.363353 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:30:18.382749 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:30:18.383147 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:30:18.400801 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 00:30:18.401224 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 00:30:18.430229 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:30:18.445207 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:30:18.445398 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:30:18.473261 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:30:18.486225 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:30:18.486426 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:30:18.509436 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:30:18.509535 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:30:18.550421 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:30:18.550892 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Nov 1 00:30:18.550942 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:30:18.573432 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:30:18.573498 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:30:19.435042 ignition[1223]: INFO : GET result: OK Nov 1 00:30:19.881581 ignition[1223]: INFO : Ignition finished successfully Nov 1 00:30:19.884617 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:30:19.884914 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:30:19.901509 systemd[1]: Stopped target network.target - Network. Nov 1 00:30:19.916361 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:30:19.916548 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:30:19.935526 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:30:19.935700 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:30:19.953528 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:30:19.953687 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:30:19.973627 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:30:19.973803 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:30:19.992629 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:30:19.992803 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:30:20.012037 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:30:20.026251 systemd-networkd[938]: enp2s0f1np1: DHCPv6 lease lost Nov 1 00:30:20.032586 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:30:20.035309 systemd-networkd[938]: enp2s0f0np0: DHCPv6 lease lost Nov 1 00:30:20.051219 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Nov 1 00:30:20.051504 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:30:20.070397 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:30:20.070741 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:30:20.091759 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:30:20.091881 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:30:20.120382 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:30:20.130283 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:30:20.130315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:30:20.150361 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:30:20.150423 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:30:20.171513 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:30:20.171609 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:30:20.191619 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:30:20.191787 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:30:20.212731 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:30:20.232294 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:30:20.232672 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:30:20.263624 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:30:20.263657 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:30:20.288197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:30:20.288227 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 1 00:30:20.306299 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:30:20.306362 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:30:20.346320 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:30:20.346492 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:30:20.385293 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:30:20.385443 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:30:20.430390 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:30:20.466176 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:30:20.466249 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:30:20.488289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:30:20.700353 systemd-journald[265]: Received SIGTERM from PID 1 (systemd). Nov 1 00:30:20.488368 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:30:20.510392 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:30:20.510649 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:30:20.530272 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:30:20.530530 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:30:20.552429 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:30:20.584521 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:30:20.630881 systemd[1]: Switching root. 
Nov 1 00:30:20.773258 systemd-journald[265]: Journal stopped Nov 1 00:29:56.026501 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025 Nov 1 00:29:56.026515 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:29:56.026522 kernel: BIOS-provided physical RAM map: Nov 1 00:29:56.026526 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Nov 1 00:29:56.026530 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Nov 1 00:29:56.026533 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Nov 1 00:29:56.026538 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Nov 1 00:29:56.026542 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Nov 1 00:29:56.026546 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000061f5ffff] usable Nov 1 00:29:56.026550 kernel: BIOS-e820: [mem 0x0000000061f60000-0x0000000061f60fff] ACPI NVS Nov 1 00:29:56.026554 kernel: BIOS-e820: [mem 0x0000000061f61000-0x0000000061f61fff] reserved Nov 1 00:29:56.026559 kernel: BIOS-e820: [mem 0x0000000061f62000-0x000000006c0c4fff] usable Nov 1 00:29:56.026563 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved Nov 1 00:29:56.026567 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable Nov 1 00:29:56.026573 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS Nov 1 00:29:56.026577 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] 
reserved Nov 1 00:29:56.026583 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable Nov 1 00:29:56.026587 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved Nov 1 00:29:56.026592 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 1 00:29:56.026596 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Nov 1 00:29:56.026600 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Nov 1 00:29:56.026605 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 1 00:29:56.026609 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Nov 1 00:29:56.026614 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable Nov 1 00:29:56.026619 kernel: NX (Execute Disable) protection: active Nov 1 00:29:56.026623 kernel: APIC: Static calls initialized Nov 1 00:29:56.026628 kernel: SMBIOS 3.2.1 present. Nov 1 00:29:56.026633 kernel: DMI: Supermicro X11SCH-F/X11SCH-F, BIOS 1.5 11/17/2020 Nov 1 00:29:56.026638 kernel: tsc: Detected 3400.000 MHz processor Nov 1 00:29:56.026642 kernel: tsc: Detected 3399.906 MHz TSC Nov 1 00:29:56.026647 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:29:56.026652 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:29:56.026657 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000 Nov 1 00:29:56.026662 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Nov 1 00:29:56.026666 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:29:56.026671 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000 Nov 1 00:29:56.026676 kernel: Using GB pages for direct mapping Nov 1 00:29:56.026681 kernel: ACPI: Early table checksum verification disabled Nov 1 00:29:56.026686 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Nov 1 00:29:56.026691 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 
00010013) Nov 1 00:29:56.026697 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013) Nov 1 00:29:56.026702 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Nov 1 00:29:56.026707 kernel: ACPI: FACS 0x000000006D762F80 000040 Nov 1 00:29:56.026713 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013) Nov 1 00:29:56.026718 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013) Nov 1 00:29:56.026723 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Nov 1 00:29:56.026728 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Nov 1 00:29:56.026733 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Nov 1 00:29:56.026738 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Nov 1 00:29:56.026743 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Nov 1 00:29:56.026748 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Nov 1 00:29:56.026754 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:29:56.026759 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Nov 1 00:29:56.026764 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Nov 1 00:29:56.026769 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:29:56.026774 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:29:56.026779 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Nov 1 00:29:56.026783 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Nov 1 00:29:56.026789 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 
00000002 01000013) Nov 1 00:29:56.026794 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Nov 1 00:29:56.026799 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Nov 1 00:29:56.026804 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013) Nov 1 00:29:56.026809 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Nov 1 00:29:56.026814 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Nov 1 00:29:56.026819 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Nov 1 00:29:56.026824 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 \xefa 01072009 AMI 00010013) Nov 1 00:29:56.026829 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Nov 1 00:29:56.026834 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Nov 1 00:29:56.026840 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Nov 1 00:29:56.026845 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Nov 1 00:29:56.026850 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Nov 1 00:29:56.026855 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733] Nov 1 00:29:56.026860 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e] Nov 1 00:29:56.026865 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf] Nov 1 00:29:56.026870 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863] Nov 1 00:29:56.026875 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab] Nov 1 00:29:56.026880 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b] Nov 1 00:29:56.026885 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b] Nov 1 00:29:56.026890 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0] Nov 1 00:29:56.026895 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3] Nov 1 00:29:56.026900 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd] Nov 1 00:29:56.026905 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea] Nov 1 00:29:56.026910 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27] Nov 1 00:29:56.026915 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5] Nov 1 00:29:56.026920 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce] Nov 1 00:29:56.026926 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311] Nov 1 00:29:56.026931 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab] Nov 1 00:29:56.026936 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d] Nov 1 00:29:56.026941 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071] Nov 1 00:29:56.026946 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab] Nov 1 00:29:56.026950 kernel: ACPI: Reserving DBG2 table memory at [mem 
0x6d68d0b0-0x6d68d103] Nov 1 00:29:56.026955 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e] Nov 1 00:29:56.026960 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17] Nov 1 00:29:56.026965 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b] Nov 1 00:29:56.026970 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93] Nov 1 00:29:56.026976 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26] Nov 1 00:29:56.026981 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f] Nov 1 00:29:56.026986 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f] Nov 1 00:29:56.026991 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf] Nov 1 00:29:56.026995 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf] Nov 1 00:29:56.027000 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b] Nov 1 00:29:56.027005 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1] Nov 1 00:29:56.027010 kernel: No NUMA configuration found Nov 1 00:29:56.027015 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff] Nov 1 00:29:56.027021 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff] Nov 1 00:29:56.027026 kernel: Zone ranges: Nov 1 00:29:56.027031 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:29:56.027036 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 00:29:56.027041 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff] Nov 1 00:29:56.027046 kernel: Movable zone start for each node Nov 1 00:29:56.027051 kernel: Early memory node ranges Nov 1 00:29:56.027056 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 1 00:29:56.027061 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 1 00:29:56.027066 kernel: node 0: [mem 0x0000000040400000-0x0000000061f5ffff] Nov 1 00:29:56.027071 kernel: node 0: [mem 
0x0000000061f62000-0x000000006c0c4fff] Nov 1 00:29:56.027076 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff] Nov 1 00:29:56.027081 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff] Nov 1 00:29:56.027086 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff] Nov 1 00:29:56.027099 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff] Nov 1 00:29:56.027104 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:29:56.027130 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 1 00:29:56.027136 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 1 00:29:56.027142 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 1 00:29:56.027164 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Nov 1 00:29:56.027169 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Nov 1 00:29:56.027175 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges Nov 1 00:29:56.027180 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 1 00:29:56.027185 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 1 00:29:56.027190 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 1 00:29:56.027196 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 1 00:29:56.027202 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 1 00:29:56.027207 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 1 00:29:56.027212 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 1 00:29:56.027218 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 1 00:29:56.027223 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 1 00:29:56.027228 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 1 00:29:56.027233 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 1 00:29:56.027239 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Nov 1 00:29:56.027244 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] 
high edge lint[0x1]) Nov 1 00:29:56.027250 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 1 00:29:56.027255 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 1 00:29:56.027260 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 1 00:29:56.027265 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 1 00:29:56.027271 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Nov 1 00:29:56.027276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:29:56.027281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:29:56.027286 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:29:56.027292 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:29:56.027298 kernel: TSC deadline timer available Nov 1 00:29:56.027303 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 1 00:29:56.027309 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices Nov 1 00:29:56.027314 kernel: Booting paravirtualized kernel on bare hardware Nov 1 00:29:56.027319 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:29:56.027325 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 1 00:29:56.027330 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Nov 1 00:29:56.027335 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Nov 1 00:29:56.027341 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 1 00:29:56.027347 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin 
verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:29:56.027353 kernel: random: crng init done Nov 1 00:29:56.027358 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 1 00:29:56.027363 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 1 00:29:56.027368 kernel: Fallback order for Node 0: 0 Nov 1 00:29:56.027374 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323 Nov 1 00:29:56.027379 kernel: Policy zone: Normal Nov 1 00:29:56.027384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:29:56.027390 kernel: software IO TLB: area num 16. Nov 1 00:29:56.027396 kernel: Memory: 32551316K/33281940K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 730364K reserved, 0K cma-reserved) Nov 1 00:29:56.027401 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 1 00:29:56.027407 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 00:29:56.027412 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 00:29:56.027418 kernel: Dynamic Preempt: voluntary Nov 1 00:29:56.027423 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:29:56.027428 kernel: rcu: RCU event tracing is enabled. Nov 1 00:29:56.027434 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 1 00:29:56.027440 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:29:56.027446 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:29:56.027451 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:29:56.027456 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:29:56.027461 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 1 00:29:56.027467 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 1 00:29:56.027472 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 1 00:29:56.027477 kernel: Console: colour dummy device 80x25 Nov 1 00:29:56.027482 kernel: printk: console [tty0] enabled Nov 1 00:29:56.027488 kernel: printk: console [ttyS1] enabled Nov 1 00:29:56.027494 kernel: ACPI: Core revision 20230628 Nov 1 00:29:56.027499 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Nov 1 00:29:56.027505 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:29:56.027510 kernel: DMAR: Host address width 39 Nov 1 00:29:56.027515 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Nov 1 00:29:56.027521 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Nov 1 00:29:56.027526 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 1 00:29:56.027531 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 1 00:29:56.027536 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff Nov 1 00:29:56.027543 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff Nov 1 00:29:56.027548 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Nov 1 00:29:56.027553 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 1 00:29:56.027559 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 1 00:29:56.027564 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 1 00:29:56.027569 kernel: x2apic enabled Nov 1 00:29:56.027575 kernel: APIC: Switched APIC routing to: cluster x2apic Nov 1 00:29:56.027580 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 1 00:29:56.027585 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 1 00:29:56.027592 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Nov 1 00:29:56.027597 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 1 00:29:56.027602 kernel: process: using mwait in idle threads Nov 1 00:29:56.027608 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 00:29:56.027613 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 00:29:56.027618 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:29:56.027623 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 1 00:29:56.027629 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 1 00:29:56.027634 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 1 00:29:56.027640 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 1 00:29:56.027646 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 1 00:29:56.027651 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:29:56.027656 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 00:29:56.027661 kernel: TAA: Mitigation: TSX disabled Nov 1 00:29:56.027667 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 1 00:29:56.027672 kernel: SRBDS: Mitigation: Microcode Nov 1 00:29:56.027677 kernel: GDS: Mitigation: Microcode Nov 1 00:29:56.027683 kernel: active return thunk: its_return_thunk Nov 1 00:29:56.027689 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 00:29:56.027694 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace Nov 1 00:29:56.027699 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:29:56.027704 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:29:56.027710 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:29:56.027715 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 1 00:29:56.027720 kernel: x86/fpu: Supporting 
XSAVE feature 0x010: 'MPX CSR' Nov 1 00:29:56.027725 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:29:56.027731 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 1 00:29:56.027737 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 1 00:29:56.027742 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Nov 1 00:29:56.027748 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:29:56.027753 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:29:56.027758 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 00:29:56.027764 kernel: landlock: Up and running. Nov 1 00:29:56.027769 kernel: SELinux: Initializing. Nov 1 00:29:56.027774 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:29:56.027780 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:29:56.027786 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 1 00:29:56.027791 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 00:29:56.027797 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 00:29:56.027802 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 00:29:56.027807 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 1 00:29:56.027813 kernel: ... version: 4 Nov 1 00:29:56.027818 kernel: ... bit width: 48 Nov 1 00:29:56.027823 kernel: ... generic registers: 4 Nov 1 00:29:56.027829 kernel: ... value mask: 0000ffffffffffff Nov 1 00:29:56.027835 kernel: ... max period: 00007fffffffffff Nov 1 00:29:56.027840 kernel: ... fixed-purpose events: 3 Nov 1 00:29:56.027845 kernel: ... 
event mask: 000000070000000f Nov 1 00:29:56.027850 kernel: signal: max sigframe size: 2032 Nov 1 00:29:56.027856 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 1 00:29:56.027861 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:29:56.027866 kernel: rcu: Max phase no-delay instances is 400. Nov 1 00:29:56.027871 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 1 00:29:56.027878 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:29:56.027883 kernel: smpboot: x86: Booting SMP configuration: Nov 1 00:29:56.027888 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Nov 1 00:29:56.027894 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 1 00:29:56.027899 kernel: smp: Brought up 1 node, 16 CPUs Nov 1 00:29:56.027904 kernel: smpboot: Max logical packages: 1 Nov 1 00:29:56.027910 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 1 00:29:56.027915 kernel: devtmpfs: initialized Nov 1 00:29:56.027920 kernel: x86/mm: Memory block size: 128MB Nov 1 00:29:56.027926 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x61f60000-0x61f60fff] (4096 bytes) Nov 1 00:29:56.027932 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes) Nov 1 00:29:56.027937 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:29:56.027943 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 1 00:29:56.027948 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:29:56.027953 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:29:56.027958 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:29:56.027964 kernel: audit: type=2000 audit(1761956990.122:1): state=initialized audit_enabled=0 res=1 Nov 
1 00:29:56.027969 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:29:56.027975 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:29:56.027980 kernel: cpuidle: using governor menu Nov 1 00:29:56.027986 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:29:56.027991 kernel: dca service started, version 1.12.1 Nov 1 00:29:56.027996 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 1 00:29:56.028002 kernel: PCI: Using configuration type 1 for base access Nov 1 00:29:56.028007 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 1 00:29:56.028012 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 00:29:56.028017 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:29:56.028023 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 00:29:56.028029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:29:56.028034 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 00:29:56.028039 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:29:56.028045 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:29:56.028050 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:29:56.028055 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 1 00:29:56.028060 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:29:56.028066 kernel: ACPI: SSDT 0xFFFF968D01D18400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 1 00:29:56.028072 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:29:56.028077 kernel: ACPI: SSDT 0xFFFF968D01D0A800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 1 00:29:56.028082 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:29:56.028088 kernel: ACPI: SSDT 0xFFFF968D00249B00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 1 00:29:56.028096 kernel: ACPI: Dynamic OEM Table Load: Nov 
1 00:29:56.028101 kernel: ACPI: SSDT 0xFFFF968D0243E000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 1 00:29:56.028125 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:29:56.028130 kernel: ACPI: SSDT 0xFFFF968D0012D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 1 00:29:56.028135 kernel: ACPI: Dynamic OEM Table Load: Nov 1 00:29:56.028141 kernel: ACPI: SSDT 0xFFFF968D01D19000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 1 00:29:56.028161 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 1 00:29:56.028166 kernel: ACPI: Interpreter enabled Nov 1 00:29:56.028171 kernel: ACPI: PM: (supports S0 S5) Nov 1 00:29:56.028177 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:29:56.028182 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 1 00:29:56.028187 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 1 00:29:56.028192 kernel: HEST: Table parsing has been initialized. Nov 1 00:29:56.028198 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 1 00:29:56.028203 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:29:56.028209 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 00:29:56.028214 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 1 00:29:56.028220 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 1 00:29:56.028225 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 1 00:29:56.028231 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 1 00:29:56.028236 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 1 00:29:56.028241 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 1 00:29:56.028246 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Nov 1 00:29:56.028251 kernel: ACPI: \_TZ_.FN00: New power resource Nov 1 00:29:56.028258 kernel: ACPI: \_TZ_.FN01: New power resource Nov 1 00:29:56.028263 kernel: ACPI: \_TZ_.FN02: New power resource Nov 1 00:29:56.028268 kernel: ACPI: \_TZ_.FN03: New power resource Nov 1 00:29:56.028273 kernel: ACPI: \_TZ_.FN04: New power resource Nov 1 00:29:56.028279 kernel: ACPI: \PIN_: New power resource Nov 1 00:29:56.028284 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 1 00:29:56.028367 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:29:56.028450 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 1 00:29:56.028503 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 1 00:29:56.028511 kernel: PCI host bridge to bus 0000:00 Nov 1 00:29:56.028561 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:29:56.028607 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:29:56.028651 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:29:56.028694 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window] Nov 1 
00:29:56.028739 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Nov 1 00:29:56.028782 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 1 00:29:56.028842 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 1 00:29:56.028899 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 1 00:29:56.028950 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 1 00:29:56.029005 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Nov 1 00:29:56.029057 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Nov 1 00:29:56.029117 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Nov 1 00:29:56.029168 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit] Nov 1 00:29:56.029218 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Nov 1 00:29:56.029266 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Nov 1 00:29:56.029320 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 1 00:29:56.029369 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit] Nov 1 00:29:56.029425 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 1 00:29:56.029475 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit] Nov 1 00:29:56.029531 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 1 00:29:56.029581 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit] Nov 1 00:29:56.029630 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 1 00:29:56.029684 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 1 00:29:56.029741 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit] Nov 1 00:29:56.029794 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit] Nov 1 00:29:56.029847 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 1 00:29:56.029898 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 
00:29:56.029951 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 1 00:29:56.030001 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 00:29:56.030056 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 1 00:29:56.030111 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit] Nov 1 00:29:56.030161 kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 1 00:29:56.030216 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 1 00:29:56.030266 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit] Nov 1 00:29:56.030315 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 1 00:29:56.030372 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 1 00:29:56.030425 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit] Nov 1 00:29:56.030475 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 1 00:29:56.030527 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 1 00:29:56.030577 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff] Nov 1 00:29:56.030629 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff] Nov 1 00:29:56.030678 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Nov 1 00:29:56.030726 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Nov 1 00:29:56.030778 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Nov 1 00:29:56.030826 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff] Nov 1 00:29:56.030875 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 1 00:29:56.030929 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 1 00:29:56.030983 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 1 00:29:56.031037 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 1 00:29:56.031088 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 1 00:29:56.031147 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 1 
00:29:56.031198 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 1 00:29:56.031253 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 1 00:29:56.031306 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 1 00:29:56.031361 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Nov 1 00:29:56.031411 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Nov 1 00:29:56.031469 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 1 00:29:56.031519 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 00:29:56.031573 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 1 00:29:56.031628 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 1 00:29:56.031678 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit] Nov 1 00:29:56.031727 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 1 00:29:56.031780 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 1 00:29:56.031829 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 1 00:29:56.031880 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 00:29:56.031935 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Nov 1 00:29:56.031990 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 1 00:29:56.032042 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref] Nov 1 00:29:56.032095 kernel: pci 0000:02:00.0: PME# supported from D3cold Nov 1 00:29:56.032148 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 00:29:56.032202 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 00:29:56.032258 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Nov 1 00:29:56.032310 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 1 00:29:56.032364 kernel: pci 0000:02:00.1: reg 0x30: [mem 
0x7e100000-0x7e1fffff pref] Nov 1 00:29:56.032414 kernel: pci 0000:02:00.1: PME# supported from D3cold Nov 1 00:29:56.032465 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 00:29:56.032515 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 00:29:56.032567 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Nov 1 00:29:56.032617 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Nov 1 00:29:56.032667 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 00:29:56.032719 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Nov 1 00:29:56.032777 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 1 00:29:56.032829 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 1 00:29:56.032880 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff] Nov 1 00:29:56.032933 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Nov 1 00:29:56.032984 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff] Nov 1 00:29:56.033038 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 1 00:29:56.033095 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Nov 1 00:29:56.033147 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 00:29:56.033198 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Nov 1 00:29:56.033253 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Nov 1 00:29:56.033305 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Nov 1 00:29:56.033356 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Nov 1 00:29:56.033407 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Nov 1 00:29:56.033458 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Nov 1 00:29:56.033511 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Nov 1 00:29:56.033563 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Nov 1 
00:29:56.033612 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 00:29:56.033663 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Nov 1 00:29:56.033712 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Nov 1 00:29:56.033769 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Nov 1 00:29:56.033821 kernel: pci 0000:07:00.0: enabling Extended Tags Nov 1 00:29:56.033875 kernel: pci 0000:07:00.0: supports D1 D2 Nov 1 00:29:56.033925 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 00:29:56.033976 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Nov 1 00:29:56.034026 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Nov 1 00:29:56.034076 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Nov 1 00:29:56.034135 kernel: pci_bus 0000:08: extended config space not accessible Nov 1 00:29:56.034193 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Nov 1 00:29:56.034250 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Nov 1 00:29:56.034304 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff] Nov 1 00:29:56.034358 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Nov 1 00:29:56.034412 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:29:56.034464 kernel: pci 0000:08:00.0: supports D1 D2 Nov 1 00:29:56.034517 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 00:29:56.034568 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Nov 1 00:29:56.034620 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Nov 1 00:29:56.034673 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Nov 1 00:29:56.034682 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 1 00:29:56.034688 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 1 00:29:56.034694 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 1 00:29:56.034700 kernel: ACPI: PCI: 
Interrupt link LNKD configured for IRQ 0 Nov 1 00:29:56.034705 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 1 00:29:56.034711 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 1 00:29:56.034717 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 1 00:29:56.034724 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 1 00:29:56.034730 kernel: iommu: Default domain type: Translated Nov 1 00:29:56.034736 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:29:56.034742 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:29:56.034747 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:29:56.034753 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Nov 1 00:29:56.034759 kernel: e820: reserve RAM buffer [mem 0x61f60000-0x63ffffff] Nov 1 00:29:56.034764 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Nov 1 00:29:56.034770 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Nov 1 00:29:56.034776 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Nov 1 00:29:56.034829 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Nov 1 00:29:56.034883 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Nov 1 00:29:56.034935 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:29:56.034943 kernel: vgaarb: loaded Nov 1 00:29:56.034950 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Nov 1 00:29:56.034956 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Nov 1 00:29:56.034961 kernel: clocksource: Switched to clocksource tsc-early Nov 1 00:29:56.034967 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:29:56.034975 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:29:56.034980 kernel: pnp: PnP ACPI init Nov 1 00:29:56.035034 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 1 00:29:56.035083 kernel: pnp 00:02: [dma 0 disabled] Nov 1 
00:29:56.035158 kernel: pnp 00:03: [dma 0 disabled] Nov 1 00:29:56.035206 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 1 00:29:56.035251 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 1 00:29:56.035301 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 1 00:29:56.035347 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 1 00:29:56.035391 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 1 00:29:56.035436 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 1 00:29:56.035481 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 1 00:29:56.035525 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 1 00:29:56.035572 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 1 00:29:56.035619 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 1 00:29:56.035667 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 1 00:29:56.035712 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 1 00:29:56.035756 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 1 00:29:56.035799 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 1 00:29:56.035843 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 1 00:29:56.035889 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 1 00:29:56.035934 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 1 00:29:56.035983 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 1 00:29:56.035992 kernel: pnp: PnP ACPI: found 9 devices Nov 1 00:29:56.035998 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:29:56.036004 kernel: NET: Registered PF_INET protocol family Nov 1 00:29:56.036011 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) 
Nov 1 00:29:56.036017 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 00:29:56.036024 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:29:56.036029 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:29:56.036035 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 1 00:29:56.036041 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 1 00:29:56.036047 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 00:29:56.036052 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 00:29:56.036058 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:29:56.036064 kernel: NET: Registered PF_XDP protocol family Nov 1 00:29:56.036158 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Nov 1 00:29:56.036211 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Nov 1 00:29:56.036261 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Nov 1 00:29:56.036310 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 00:29:56.036361 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 00:29:56.036415 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 00:29:56.036465 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 00:29:56.036517 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 00:29:56.036566 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Nov 1 00:29:56.036616 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Nov 1 00:29:56.036665 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 00:29:56.036714 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Nov 1 00:29:56.036762 kernel: pci 
0000:00:1b.4: PCI bridge to [bus 04] Nov 1 00:29:56.036814 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 00:29:56.036862 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Nov 1 00:29:56.036910 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Nov 1 00:29:56.036960 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 00:29:56.037007 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Nov 1 00:29:56.037057 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Nov 1 00:29:56.037133 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Nov 1 00:29:56.037185 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Nov 1 00:29:56.037236 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Nov 1 00:29:56.037288 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Nov 1 00:29:56.037337 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Nov 1 00:29:56.037387 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Nov 1 00:29:56.037433 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 1 00:29:56.037477 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:29:56.037521 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:29:56.037565 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:29:56.037608 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Nov 1 00:29:56.037652 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 1 00:29:56.037704 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Nov 1 00:29:56.037751 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 00:29:56.037801 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Nov 1 00:29:56.037848 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Nov 1 00:29:56.037898 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Nov 
1 00:29:56.037943 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Nov 1 00:29:56.037996 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 1 00:29:56.038041 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Nov 1 00:29:56.038089 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Nov 1 00:29:56.038157 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Nov 1 00:29:56.038165 kernel: PCI: CLS 64 bytes, default 64 Nov 1 00:29:56.038172 kernel: DMAR: No ATSR found Nov 1 00:29:56.038177 kernel: DMAR: No SATC found Nov 1 00:29:56.038185 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Nov 1 00:29:56.038191 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Nov 1 00:29:56.038197 kernel: DMAR: IOMMU feature nwfs inconsistent Nov 1 00:29:56.038202 kernel: DMAR: IOMMU feature pasid inconsistent Nov 1 00:29:56.038208 kernel: DMAR: IOMMU feature eafs inconsistent Nov 1 00:29:56.038214 kernel: DMAR: IOMMU feature prs inconsistent Nov 1 00:29:56.038219 kernel: DMAR: IOMMU feature nest inconsistent Nov 1 00:29:56.038225 kernel: DMAR: IOMMU feature mts inconsistent Nov 1 00:29:56.038231 kernel: DMAR: IOMMU feature sc_support inconsistent Nov 1 00:29:56.038237 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Nov 1 00:29:56.038243 kernel: DMAR: dmar0: Using Queued invalidation Nov 1 00:29:56.038249 kernel: DMAR: dmar1: Using Queued invalidation Nov 1 00:29:56.038297 kernel: pci 0000:00:02.0: Adding to iommu group 0 Nov 1 00:29:56.038347 kernel: pci 0000:00:00.0: Adding to iommu group 1 Nov 1 00:29:56.038397 kernel: pci 0000:00:01.0: Adding to iommu group 2 Nov 1 00:29:56.038445 kernel: pci 0000:00:01.1: Adding to iommu group 2 Nov 1 00:29:56.038495 kernel: pci 0000:00:08.0: Adding to iommu group 3 Nov 1 00:29:56.038544 kernel: pci 0000:00:12.0: Adding to iommu group 4 Nov 1 00:29:56.038595 kernel: pci 0000:00:14.0: Adding to iommu group 5 Nov 1 00:29:56.038644 kernel: pci 0000:00:14.2: Adding to iommu group 5 Nov 1 
00:29:56.038692 kernel: pci 0000:00:15.0: Adding to iommu group 6 Nov 1 00:29:56.038741 kernel: pci 0000:00:15.1: Adding to iommu group 6 Nov 1 00:29:56.038789 kernel: pci 0000:00:16.0: Adding to iommu group 7 Nov 1 00:29:56.038838 kernel: pci 0000:00:16.1: Adding to iommu group 7 Nov 1 00:29:56.038886 kernel: pci 0000:00:16.4: Adding to iommu group 7 Nov 1 00:29:56.038935 kernel: pci 0000:00:17.0: Adding to iommu group 8 Nov 1 00:29:56.038986 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Nov 1 00:29:56.039036 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Nov 1 00:29:56.039085 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Nov 1 00:29:56.039181 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Nov 1 00:29:56.039230 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Nov 1 00:29:56.039278 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Nov 1 00:29:56.039326 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Nov 1 00:29:56.039375 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Nov 1 00:29:56.039426 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Nov 1 00:29:56.039477 kernel: pci 0000:02:00.0: Adding to iommu group 2 Nov 1 00:29:56.039528 kernel: pci 0000:02:00.1: Adding to iommu group 2 Nov 1 00:29:56.039578 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 1 00:29:56.039629 kernel: pci 0000:05:00.0: Adding to iommu group 17 Nov 1 00:29:56.039679 kernel: pci 0000:07:00.0: Adding to iommu group 18 Nov 1 00:29:56.039732 kernel: pci 0000:08:00.0: Adding to iommu group 18 Nov 1 00:29:56.039740 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 1 00:29:56.039748 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 00:29:56.039754 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Nov 1 00:29:56.039759 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Nov 1 00:29:56.039765 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 1 
00:29:56.039771 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 1 00:29:56.039777 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 1 00:29:56.039782 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Nov 1 00:29:56.039836 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 1 00:29:56.039845 kernel: Initialise system trusted keyrings Nov 1 00:29:56.039852 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 1 00:29:56.039858 kernel: Key type asymmetric registered Nov 1 00:29:56.039863 kernel: Asymmetric key parser 'x509' registered Nov 1 00:29:56.039869 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 00:29:56.039874 kernel: io scheduler mq-deadline registered Nov 1 00:29:56.039880 kernel: io scheduler kyber registered Nov 1 00:29:56.039886 kernel: io scheduler bfq registered Nov 1 00:29:56.039935 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Nov 1 00:29:56.039987 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Nov 1 00:29:56.040036 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Nov 1 00:29:56.040086 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Nov 1 00:29:56.040181 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Nov 1 00:29:56.040230 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Nov 1 00:29:56.040279 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Nov 1 00:29:56.040332 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 1 00:29:56.040343 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 1 00:29:56.040349 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Nov 1 00:29:56.040355 kernel: pstore: Using crash dump compression: deflate Nov 1 00:29:56.040360 kernel: pstore: Registered erst as persistent store backend Nov 1 00:29:56.040366 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:29:56.040372 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:29:56.040377 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:29:56.040383 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 00:29:56.040432 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 1 00:29:56.040442 kernel: i8042: PNP: No PS/2 controller found. Nov 1 00:29:56.040487 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 1 00:29:56.040532 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 1 00:29:56.040578 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T00:29:54 UTC (1761956994) Nov 1 00:29:56.040622 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 1 00:29:56.040630 kernel: intel_pstate: Intel P-state driver initializing Nov 1 00:29:56.040636 kernel: intel_pstate: Disabling energy efficiency optimization Nov 1 00:29:56.040642 kernel: intel_pstate: HWP enabled Nov 1 00:29:56.040649 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 1 00:29:56.040655 kernel: vesafb: scrolling: redraw Nov 1 00:29:56.040661 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 1 00:29:56.040666 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x00000000992d366d, using 768k, total 768k Nov 1 00:29:56.040672 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 00:29:56.040678 kernel: fb0: VESA VGA frame buffer device Nov 1 00:29:56.040684 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:29:56.040689 kernel: Segment Routing with IPv6 Nov 1 00:29:56.040695 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:29:56.040701 kernel: NET: Registered PF_PACKET protocol family Nov 1 
00:29:56.040707 kernel: Key type dns_resolver registered Nov 1 00:29:56.040713 kernel: microcode: Current revision: 0x000000fc Nov 1 00:29:56.040718 kernel: microcode: Updated early from: 0x000000de Nov 1 00:29:56.040724 kernel: microcode: Microcode Update Driver: v2.2. Nov 1 00:29:56.040729 kernel: IPI shorthand broadcast: enabled Nov 1 00:29:56.040735 kernel: sched_clock: Marking stable (1737000659, 1373976193)->(4575120178, -1464143326) Nov 1 00:29:56.040741 kernel: registered taskstats version 1 Nov 1 00:29:56.040746 kernel: Loading compiled-in X.509 certificates Nov 1 00:29:56.040753 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:29:56.040759 kernel: Key type .fscrypt registered Nov 1 00:29:56.040764 kernel: Key type fscrypt-provisioning registered Nov 1 00:29:56.040770 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:29:56.040776 kernel: ima: No architecture policies found Nov 1 00:29:56.040781 kernel: clk: Disabling unused clocks Nov 1 00:29:56.040787 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:29:56.040793 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:29:56.040798 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:29:56.040805 kernel: Run /init as init process Nov 1 00:29:56.040811 kernel: with arguments: Nov 1 00:29:56.040816 kernel: /init Nov 1 00:29:56.040822 kernel: with environment: Nov 1 00:29:56.040828 kernel: HOME=/ Nov 1 00:29:56.040833 kernel: TERM=linux Nov 1 00:29:56.040840 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:29:56.040848 systemd[1]: Detected architecture x86-64. 
Nov 1 00:29:56.040854 systemd[1]: Running in initrd. Nov 1 00:29:56.040860 systemd[1]: No hostname configured, using default hostname. Nov 1 00:29:56.040866 systemd[1]: Hostname set to . Nov 1 00:29:56.040871 systemd[1]: Initializing machine ID from random generator. Nov 1 00:29:56.040877 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:29:56.040883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:29:56.040889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:29:56.040896 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:29:56.040902 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:29:56.040908 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:29:56.040914 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:29:56.040921 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:29:56.040927 kernel: tsc: Refined TSC clocksource calibration: 3408.094 MHz Nov 1 00:29:56.040933 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x31202cc47c0, max_idle_ns: 440795231130 ns Nov 1 00:29:56.040939 kernel: clocksource: Switched to clocksource tsc Nov 1 00:29:56.040945 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:29:56.040951 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:29:56.040957 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:29:56.040963 systemd[1]: Reached target paths.target - Path Units. 
Nov 1 00:29:56.040969 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:29:56.040975 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:29:56.040980 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:29:56.040986 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:29:56.040993 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:29:56.040999 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:29:56.041005 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:29:56.041011 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:29:56.041016 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:29:56.041023 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:29:56.041028 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:29:56.041034 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:29:56.041041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:29:56.041047 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:29:56.041053 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:29:56.041059 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:29:56.041075 systemd-journald[265]: Collecting audit messages is disabled. Nov 1 00:29:56.041093 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:29:56.041100 systemd-journald[265]: Journal started Nov 1 00:29:56.041133 systemd-journald[265]: Runtime Journal (/run/log/journal/ad837761db5d4bafb7fc71bea222888d) is 8.0M, max 636.6M, 628.6M free. Nov 1 00:29:56.054082 systemd-modules-load[266]: Inserted module 'overlay' Nov 1 00:29:56.076118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 1 00:29:56.104888 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:29:56.167136 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:29:56.167168 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:29:56.167178 kernel: Bridge firewalling registered Nov 1 00:29:56.163278 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:29:56.167068 systemd-modules-load[266]: Inserted module 'br_netfilter' Nov 1 00:29:56.188415 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:29:56.213475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:29:56.231479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:29:56.268351 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:29:56.268780 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:29:56.269130 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:29:56.269506 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:29:56.273843 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:29:56.274549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:29:56.274686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:29:56.275384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:29:56.276357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:29:56.280042 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 1 00:29:56.284356 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:29:56.295808 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:29:56.297892 systemd-resolved[299]: Positive Trust Anchors: Nov 1 00:29:56.297905 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:29:56.297945 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:29:56.299965 systemd-resolved[299]: Defaulting to hostname 'linux'. Nov 1 00:29:56.318344 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:29:56.339295 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:29:56.449197 dracut-cmdline[310]: dracut-dracut-053 Nov 1 00:29:56.449197 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:29:56.516097 kernel: SCSI subsystem initialized Nov 1 00:29:56.539146 kernel: Loading iSCSI transport class v2.0-870. 
Nov 1 00:29:56.562141 kernel: iscsi: registered transport (tcp) Nov 1 00:29:56.594348 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:29:56.594365 kernel: QLogic iSCSI HBA Driver Nov 1 00:29:56.627563 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 00:29:56.659332 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:29:56.721816 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:29:56.721835 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:29:56.741458 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 00:29:56.801152 kernel: raid6: avx2x4 gen() 53021 MB/s Nov 1 00:29:56.833154 kernel: raid6: avx2x2 gen() 53006 MB/s Nov 1 00:29:56.869469 kernel: raid6: avx2x1 gen() 45279 MB/s Nov 1 00:29:56.869488 kernel: raid6: using algorithm avx2x4 gen() 53021 MB/s Nov 1 00:29:56.916535 kernel: raid6: .... xor() 19905 MB/s, rmw enabled Nov 1 00:29:56.916555 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:29:56.958146 kernel: xor: automatically using best checksumming function avx Nov 1 00:29:57.076100 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:29:57.081498 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:29:57.119393 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:29:57.126792 systemd-udevd[496]: Using default interface naming scheme 'v255'. Nov 1 00:29:57.130219 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:29:57.174361 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:29:57.218117 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Nov 1 00:29:57.244712 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 1 00:29:57.266387 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:29:57.329227 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:29:57.354149 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:29:57.354176 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:29:57.363278 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 00:29:57.390100 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:29:57.400102 kernel: PTP clock support registered Nov 1 00:29:57.400148 kernel: libata version 3.00 loaded. Nov 1 00:29:57.401400 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:29:57.498152 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:29:57.498168 kernel: ACPI: bus type USB registered Nov 1 00:29:57.498178 kernel: usbcore: registered new interface driver usbfs Nov 1 00:29:57.498187 kernel: usbcore: registered new interface driver hub Nov 1 00:29:57.498196 kernel: usbcore: registered new device driver usb Nov 1 00:29:57.498205 kernel: AES CTR mode by8 optimization enabled Nov 1 00:29:57.401444 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:29:57.502095 kernel: ahci 0000:00:17.0: version 3.0 Nov 1 00:29:57.510163 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:29:57.556657 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Nov 1 00:29:57.556763 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 1 00:29:57.556832 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 1 00:29:57.530152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:29:57.608140 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Nov 1 00:29:57.608154 kernel: scsi host0: ahci Nov 1 00:29:57.608245 kernel: scsi host1: ahci Nov 1 00:29:57.608265 kernel: igb 0000:04:00.0: added PHC on eth0 Nov 1 00:29:57.530219 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:29:58.060620 kernel: scsi host2: ahci Nov 1 00:29:58.060759 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 00:29:58.060841 kernel: scsi host3: ahci Nov 1 00:29:58.060911 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:dc Nov 1 00:29:58.060979 kernel: scsi host4: ahci Nov 1 00:29:58.061042 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Nov 1 00:29:58.061112 kernel: scsi host5: ahci Nov 1 00:29:58.061174 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 00:29:58.061240 kernel: scsi host6: ahci Nov 1 00:29:58.061301 kernel: igb 0000:05:00.0: added PHC on eth1 Nov 1 00:29:58.061370 kernel: scsi host7: ahci Nov 1 00:29:58.061436 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 00:29:58.061499 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 129 Nov 1 00:29:58.061508 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:dd Nov 1 00:29:58.061570 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 129 Nov 1 00:29:58.061579 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Nov 1 00:29:58.061641 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 129 Nov 1 00:29:58.061650 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Nov 1 00:29:58.061713 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 129 Nov 1 00:29:58.061721 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 129 Nov 1 00:29:58.061728 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 129 Nov 1 00:29:58.061736 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 129 Nov 1 00:29:58.061743 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 129 Nov 1 00:29:58.061750 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 00:29:58.061815 kernel: mlx5_core 0000:02:00.0: firmware version: 14.29.2002 Nov 1 00:29:58.061881 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 1 00:29:58.061944 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 00:29:58.062007 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 1 00:29:58.062069 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 00:29:57.594645 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:29:58.148498 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 1 00:29:58.148580 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 1 00:29:58.148647 kernel: hub 1-0:1.0: USB hub found Nov 1 00:29:58.148718 kernel: hub 1-0:1.0: 16 ports detected Nov 1 00:29:58.148779 kernel: hub 2-0:1.0: USB hub found Nov 1 00:29:58.148843 kernel: hub 2-0:1.0: 10 ports detected Nov 1 00:29:58.063250 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:29:58.171351 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:29:58.181789 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 1 00:29:58.181816 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:29:58.181842 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:29:58.192245 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:29:58.434995 kernel: ata8: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.435012 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 00:29:58.435119 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 00:29:58.435133 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Nov 1 00:29:58.435214 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.435224 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 00:29:58.435233 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 00:29:58.435243 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.435252 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.435261 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.435270 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 1 00:29:58.435289 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 00:29:58.435300 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:29:58.429827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:29:58.465142 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 00:29:58.465160 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 00:29:58.472519 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 1 00:29:58.670009 kernel: ata1.00: Features: NCQ-prio Nov 1 00:29:58.670023 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 00:29:58.670124 kernel: ata2.00: Features: NCQ-prio Nov 1 00:29:58.670132 kernel: mlx5_core 0000:02:00.1: firmware version: 14.29.2002 Nov 1 00:29:58.670203 kernel: ata1.00: configured for UDMA/133 Nov 1 00:29:58.670211 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 00:29:58.670276 kernel: ata2.00: configured for UDMA/133 Nov 1 00:29:58.670284 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 00:29:58.670356 kernel: hub 1-14:1.0: USB hub found Nov 1 00:29:58.670432 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 00:29:58.670503 kernel: hub 1-14:1.0: 4 ports detected Nov 1 00:29:58.670583 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Nov 1 00:29:58.692677 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:29:58.692696 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 00:29:58.692704 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 00:29:58.722061 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 00:29:58.722186 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 1 00:29:58.722266 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 1 00:29:58.722344 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 1 00:29:58.722420 kernel: sd 1:0:0:0: [sda] Write Protect is off Nov 1 00:29:58.723145 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Nov 1 00:29:58.732057 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 1 00:29:58.732147 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:29:58.732222 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 1 00:29:58.742074 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Nov 1 
00:29:58.742157 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:29:58.751296 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:29:59.135031 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:29:59.135047 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Nov 1 00:29:59.135147 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 1 00:29:59.135218 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:29:59.135227 kernel: GPT:9289727 != 937703087 Nov 1 00:29:59.135234 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:29:59.135241 kernel: GPT:9289727 != 937703087 Nov 1 00:29:59.135248 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:29:59.135255 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:29:59.135262 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 1 00:29:59.135327 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Nov 1 00:29:59.135396 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 00:29:59.135404 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 1 00:29:59.135517 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 1 00:29:59.135583 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 00:29:59.135648 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (564) Nov 1 00:29:59.135657 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sdb3 scanned by (udev-worker) (541) Nov 1 00:29:59.135666 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth2 Nov 1 00:29:59.190098 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:29:59.209140 kernel: usbcore: registered new interface driver usbhid Nov 1 00:29:59.209161 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth0 
Nov 1 00:29:59.209249 kernel: usbhid: USB HID core driver Nov 1 00:29:59.216987 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Nov 1 00:29:59.240096 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 1 00:29:59.243531 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 1 00:29:59.297456 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:29:59.321260 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 1 00:29:59.396906 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 1 00:29:59.397007 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 1 00:29:59.397016 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 1 00:29:59.348332 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 00:29:59.432973 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 00:29:59.458178 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:29:59.492074 disk-uuid[737]: Primary Header is updated. Nov 1 00:29:59.492074 disk-uuid[737]: Secondary Entries is updated. Nov 1 00:29:59.492074 disk-uuid[737]: Secondary Header is updated. 
Nov 1 00:29:59.561183 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:29:59.561194 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:29:59.561201 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:29:59.561212 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:29:59.587770 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:29:59.608096 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:30:00.587815 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 00:30:00.608020 disk-uuid[738]: The operation has completed successfully. Nov 1 00:30:00.617228 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 00:30:00.648417 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:30:00.648483 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:30:00.685387 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 00:30:00.725348 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:30:00.725363 sh[755]: Success Nov 1 00:30:00.768598 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:30:00.790448 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:30:00.799475 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:30:00.855855 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:30:00.855898 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:30:00.877937 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:30:00.897714 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:30:00.916425 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:30:00.957148 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 00:30:00.959630 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Nov 1 00:30:00.968557 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:30:00.974316 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:30:01.090934 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:30:01.090952 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:30:01.090960 kernel: BTRFS info (device sdb6): using free space tree Nov 1 00:30:01.090967 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 00:30:01.090974 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 00:30:01.078299 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:30:01.137330 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:30:01.124622 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:30:01.147516 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:30:01.178362 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:30:01.189039 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:30:01.242907 ignition[936]: Ignition 2.19.0 Nov 1 00:30:01.242914 ignition[936]: Stage: fetch-offline Nov 1 00:30:01.245357 unknown[936]: fetched base config from "system" Nov 1 00:30:01.242938 ignition[936]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:30:01.245361 unknown[936]: fetched user config from "system" Nov 1 00:30:01.242945 ignition[936]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 00:30:01.246334 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 1 00:30:01.243019 ignition[936]: parsed url from cmdline: ""
Nov 1 00:30:01.248931 systemd-networkd[938]: lo: Link UP
Nov 1 00:30:01.243022 ignition[936]: no config URL provided
Nov 1 00:30:01.248934 systemd-networkd[938]: lo: Gained carrier
Nov 1 00:30:01.243026 ignition[936]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:30:01.251647 systemd-networkd[938]: Enumeration completed
Nov 1 00:30:01.243061 ignition[936]: parsing config with SHA512: 79211332c7063c460813a592a7b77bf791132971ea4fc664d1f7873475cbf514cb067fae4caca4fbfd34f774428fdc62ecb2657f92c809dc9c55e5329e11dbeb
Nov 1 00:30:01.252575 systemd-networkd[938]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:30:01.245575 ignition[936]: fetch-offline: fetch-offline passed
Nov 1 00:30:01.261480 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:30:01.245577 ignition[936]: POST message to Packet Timeline
Nov 1 00:30:01.278581 systemd[1]: Reached target network.target - Network.
Nov 1 00:30:01.245580 ignition[936]: POST Status error: resource requires networking
Nov 1 00:30:01.281689 systemd-networkd[938]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:30:01.245616 ignition[936]: Ignition finished successfully
Nov 1 00:30:01.293341 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 00:30:01.322879 ignition[951]: Ignition 2.19.0
Nov 1 00:30:01.301368 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:30:01.322883 ignition[951]: Stage: kargs
Nov 1 00:30:01.495325 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Nov 1 00:30:01.310710 systemd-networkd[938]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:30:01.322996 ignition[951]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:30:01.490685 systemd-networkd[938]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:30:01.323003 ignition[951]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 00:30:01.323548 ignition[951]: kargs: kargs passed
Nov 1 00:30:01.323551 ignition[951]: POST message to Packet Timeline
Nov 1 00:30:01.323560 ignition[951]: GET https://metadata.packet.net/metadata: attempt #1
Nov 1 00:30:01.324002 ignition[951]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40902->[::1]:53: read: connection refused
Nov 1 00:30:01.524767 ignition[951]: GET https://metadata.packet.net/metadata: attempt #2
Nov 1 00:30:01.525739 ignition[951]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34548->[::1]:53: read: connection refused
Nov 1 00:30:01.735129 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Nov 1 00:30:01.736335 systemd-networkd[938]: eno1: Link UP
Nov 1 00:30:01.736530 systemd-networkd[938]: eno2: Link UP
Nov 1 00:30:01.736642 systemd-networkd[938]: enp2s0f0np0: Link UP
Nov 1 00:30:01.736770 systemd-networkd[938]: enp2s0f0np0: Gained carrier
Nov 1 00:30:01.746252 systemd-networkd[938]: enp2s0f1np1: Link UP
Nov 1 00:30:01.766240 systemd-networkd[938]: enp2s0f0np0: DHCPv4 address 139.178.94.145/31, gateway 139.178.94.144 acquired from 145.40.83.140
Nov 1 00:30:01.926042 ignition[951]: GET https://metadata.packet.net/metadata: attempt #3
Nov 1 00:30:01.927127 ignition[951]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59876->[::1]:53: read: connection refused
Nov 1 00:30:02.544759 systemd-networkd[938]: enp2s0f1np1: Gained carrier
Nov 1 00:30:02.727545 ignition[951]: GET https://metadata.packet.net/metadata: attempt #4
Nov 1 00:30:02.728677 ignition[951]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48192->[::1]:53: read: connection refused
Nov 1 00:30:03.120601 systemd-networkd[938]: enp2s0f0np0: Gained IPv6LL
Nov 1 00:30:04.208568 systemd-networkd[938]: enp2s0f1np1: Gained IPv6LL
Nov 1 00:30:04.330354 ignition[951]: GET https://metadata.packet.net/metadata: attempt #5
Nov 1 00:30:04.331466 ignition[951]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60921->[::1]:53: read: connection refused
Nov 1 00:30:07.534998 ignition[951]: GET https://metadata.packet.net/metadata: attempt #6
Nov 1 00:30:08.619421 ignition[951]: GET result: OK
Nov 1 00:30:09.817945 ignition[951]: Ignition finished successfully
Nov 1 00:30:09.823819 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:30:09.846352 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:30:09.853856 ignition[969]: Ignition 2.19.0
Nov 1 00:30:09.853861 ignition[969]: Stage: disks
Nov 1 00:30:09.853972 ignition[969]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:30:09.853979 ignition[969]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 00:30:09.854512 ignition[969]: disks: disks passed
Nov 1 00:30:09.854515 ignition[969]: POST message to Packet Timeline
Nov 1 00:30:09.854524 ignition[969]: GET https://metadata.packet.net/metadata: attempt #1
Nov 1 00:30:11.129978 ignition[969]: GET result: OK
Nov 1 00:30:11.882644 ignition[969]: Ignition finished successfully
Nov 1 00:30:11.886308 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:30:11.902409 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:30:11.920348 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:30:11.942524 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:30:11.963466 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:30:11.984483 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:30:12.022357 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:30:12.058268 systemd-fsck[985]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 1 00:30:12.068647 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:30:12.076416 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:30:12.203094 kernel: EXT4-fs (sdb9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:30:12.203302 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:30:12.213608 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:30:12.250268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:30:12.259057 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:30:12.400357 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (994)
Nov 1 00:30:12.400371 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:30:12.400380 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:30:12.400387 kernel: BTRFS info (device sdb6): using free space tree
Nov 1 00:30:12.400394 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Nov 1 00:30:12.400401 kernel: BTRFS info (device sdb6): auto enabling async discard
Nov 1 00:30:12.392206 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 1 00:30:12.411654 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Nov 1 00:30:12.433336 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:30:12.483340 coreos-metadata[1011]: Nov 01 00:30:12.458 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 1 00:30:12.433353 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:30:12.434301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:30:12.531185 coreos-metadata[1012]: Nov 01 00:30:12.470 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 1 00:30:12.464360 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:30:12.502215 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:30:12.562218 initrd-setup-root[1026]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:30:12.573164 initrd-setup-root[1033]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:30:12.584123 initrd-setup-root[1040]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:30:12.594206 initrd-setup-root[1047]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:30:12.601071 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:30:12.630336 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:30:12.666361 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:30:12.648600 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:30:12.675848 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:30:12.700683 ignition[1114]: INFO : Ignition 2.19.0
Nov 1 00:30:12.700683 ignition[1114]: INFO : Stage: mount
Nov 1 00:30:12.705501 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 00:30:12.730286 ignition[1114]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:30:12.730286 ignition[1114]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 00:30:12.730286 ignition[1114]: INFO : mount: mount passed
Nov 1 00:30:12.730286 ignition[1114]: INFO : POST message to Packet Timeline
Nov 1 00:30:12.730286 ignition[1114]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 1 00:30:13.518659 coreos-metadata[1012]: Nov 01 00:30:13.518 INFO Fetch successful
Nov 1 00:30:13.554844 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Nov 1 00:30:13.554902 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Nov 1 00:30:13.675751 ignition[1114]: INFO : GET result: OK
Nov 1 00:30:13.928997 coreos-metadata[1011]: Nov 01 00:30:13.928 INFO Fetch successful
Nov 1 00:30:13.962392 coreos-metadata[1011]: Nov 01 00:30:13.962 INFO wrote hostname ci-4081.3.6-n-d37906c143 to /sysroot/etc/hostname
Nov 1 00:30:13.963679 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 1 00:30:14.425388 ignition[1114]: INFO : Ignition finished successfully
Nov 1 00:30:14.428482 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 00:30:14.455408 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 00:30:14.467289 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:30:14.533011 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1141)
Nov 1 00:30:14.533034 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:30:14.553520 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:30:14.571857 kernel: BTRFS info (device sdb6): using free space tree
Nov 1 00:30:14.611498 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Nov 1 00:30:14.611515 kernel: BTRFS info (device sdb6): auto enabling async discard
Nov 1 00:30:14.625606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:30:14.669355 ignition[1158]: INFO : Ignition 2.19.0
Nov 1 00:30:14.669355 ignition[1158]: INFO : Stage: files
Nov 1 00:30:14.684378 ignition[1158]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:30:14.684378 ignition[1158]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 00:30:14.684378 ignition[1158]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:30:14.684378 ignition[1158]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:30:14.684378 ignition[1158]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:30:14.684378 ignition[1158]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:30:14.684378 ignition[1158]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:30:14.684378 ignition[1158]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:30:14.684378 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:30:14.684378 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 00:30:14.674251 unknown[1158]: wrote ssh authorized keys file for user: core
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:30:14.817342 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:30:15.065436 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 1 00:30:15.276007 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 00:30:15.596789 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 00:30:15.596789 ignition[1158]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 00:30:15.626317 ignition[1158]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:30:15.626317 ignition[1158]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:30:15.626317 ignition[1158]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 00:30:15.626317 ignition[1158]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:30:15.626317 ignition[1158]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:30:15.626317 ignition[1158]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:30:15.626317 ignition[1158]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:30:15.626317 ignition[1158]: INFO : files: files passed
Nov 1 00:30:15.626317 ignition[1158]: INFO : POST message to Packet Timeline
Nov 1 00:30:15.626317 ignition[1158]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 1 00:30:16.660557 ignition[1158]: INFO : GET result: OK
Nov 1 00:30:17.389183 ignition[1158]: INFO : Ignition finished successfully
Nov 1 00:30:17.392316 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 00:30:17.428402 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 00:30:17.428895 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 00:30:17.457508 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:30:17.457576 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 00:30:17.492060 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:30:17.510384 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 00:30:17.543317 initrd-setup-root-after-ignition[1199]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:30:17.543317 initrd-setup-root-after-ignition[1199]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:30:17.558322 initrd-setup-root-after-ignition[1203]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:30:17.548342 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 00:30:17.621358 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:30:17.621418 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 00:30:17.640553 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 00:30:17.662291 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 00:30:17.683604 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 00:30:17.697512 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 00:30:17.775441 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:30:17.801344 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 00:30:17.806515 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:30:17.831709 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:30:17.853886 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 00:30:17.872789 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:30:17.873231 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:30:17.901043 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 00:30:17.922805 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 00:30:17.941809 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 00:30:17.959844 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:30:17.980700 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 00:30:18.001843 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 00:30:18.021704 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:30:18.042729 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 00:30:18.063734 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 00:30:18.083827 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 00:30:18.102603 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:30:18.103003 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:30:18.128825 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:30:18.148732 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:30:18.169586 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 00:30:18.170045 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:30:18.191598 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:30:18.191998 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:30:18.223685 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:30:18.224158 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:30:18.243892 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 00:30:18.261582 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:30:18.262019 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:30:18.282700 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 00:30:18.301705 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 00:30:18.319818 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:30:18.320157 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:30:18.339868 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:30:18.340205 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:30:18.362924 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:30:18.477286 ignition[1223]: INFO : Ignition 2.19.0
Nov 1 00:30:18.477286 ignition[1223]: INFO : Stage: umount
Nov 1 00:30:18.477286 ignition[1223]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:30:18.477286 ignition[1223]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 00:30:18.477286 ignition[1223]: INFO : umount: umount passed
Nov 1 00:30:18.477286 ignition[1223]: INFO : POST message to Packet Timeline
Nov 1 00:30:18.477286 ignition[1223]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 1 00:30:18.363353 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:30:18.382749 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:30:18.383147 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 00:30:18.400801 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 1 00:30:18.401224 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 1 00:30:18.430229 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 00:30:18.445207 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:30:18.445398 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:30:18.473261 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 00:30:18.486225 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:30:18.486426 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:30:18.509436 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:30:18.509535 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:30:18.550421 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:30:18.550892 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:30:18.550942 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 00:30:18.573432 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:30:18.573498 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 00:30:19.435042 ignition[1223]: INFO : GET result: OK
Nov 1 00:30:19.881581 ignition[1223]: INFO : Ignition finished successfully
Nov 1 00:30:19.884617 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:30:19.884914 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 00:30:19.901509 systemd[1]: Stopped target network.target - Network.
Nov 1 00:30:19.916361 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:30:19.916548 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 00:30:19.935526 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:30:19.935700 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 00:30:19.953528 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:30:19.953687 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 00:30:19.973627 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 00:30:19.973803 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 00:30:19.992629 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:30:19.992803 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 00:30:20.012037 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 00:30:20.026251 systemd-networkd[938]: enp2s0f1np1: DHCPv6 lease lost
Nov 1 00:30:20.032586 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 00:30:20.035309 systemd-networkd[938]: enp2s0f0np0: DHCPv6 lease lost
Nov 1 00:30:20.051219 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:30:20.051504 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 00:30:20.070397 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:30:20.070741 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 00:30:20.091759 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:30:20.091881 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:30:20.120382 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 00:30:20.130283 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:30:20.130315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:30:20.150361 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:30:20.150423 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:30:20.171513 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:30:20.171609 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:30:20.191619 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 00:30:20.191787 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:30:20.212731 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:30:20.232294 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:30:20.232672 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:30:20.263624 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:30:20.263657 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:30:20.288197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:30:20.288227 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:30:20.306299 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:30:20.306362 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:30:20.346320 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:30:20.346492 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:30:20.385293 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:30:20.385443 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:30:20.430390 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 00:30:20.466176 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:30:20.466249 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:30:20.488289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:30:20.700353 systemd-journald[265]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:30:20.488368 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:30:20.510392 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:30:20.510649 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 00:30:20.530272 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:30:20.530530 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 00:30:20.552429 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 00:30:20.584521 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 00:30:20.630881 systemd[1]: Switching root.
Nov 1 00:30:20.773258 systemd-journald[265]: Journal stopped
Nov 1 00:30:23.496701 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:30:23.496717 kernel: SELinux: policy capability open_perms=1
Nov 1 00:30:23.496725 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:30:23.496732 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:30:23.496740 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:30:23.496745 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:30:23.496752 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:30:23.496757 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:30:23.496763 kernel: audit: type=1403 audit(1761957021.033:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:30:23.496770 systemd[1]: Successfully loaded SELinux policy in 163.079ms.
Nov 1 00:30:23.496779 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.115ms.
Nov 1 00:30:23.496786 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:30:23.496793 systemd[1]: Detected architecture x86-64.
Nov 1 00:30:23.496799 systemd[1]: Detected first boot.
Nov 1 00:30:23.496806 systemd[1]: Hostname set to <ci-4081.3.6-n-d37906c143>.
Nov 1 00:30:23.496814 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:30:23.496821 zram_generator::config[1274]: No configuration found.
Nov 1 00:30:23.496828 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:30:23.496835 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 00:30:23.496841 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 00:30:23.496848 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:30:23.496856 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 00:30:23.496863 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 00:30:23.496870 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 00:30:23.496877 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 00:30:23.496884 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 00:30:23.496890 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 00:30:23.496897 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 00:30:23.496905 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 00:30:23.496912 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:30:23.496919 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:30:23.496926 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 00:30:23.496933 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 00:30:23.496940 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 00:30:23.496947 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:30:23.496954 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Nov 1 00:30:23.496962 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:30:23.496969 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 00:30:23.496975 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 00:30:23.496982 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:30:23.496991 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 00:30:23.496998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:30:23.497005 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:30:23.497012 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:30:23.497020 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:30:23.497027 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 00:30:23.497035 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 00:30:23.497042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:30:23.497049 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:30:23.497056 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:30:23.497065 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 00:30:23.497072 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 00:30:23.497079 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 00:30:23.497087 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 00:30:23.497104 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:30:23.497112 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 00:30:23.497119 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 00:30:23.497128 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 00:30:23.497135 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:30:23.497143 systemd[1]: Reached target machines.target - Containers.
Nov 1 00:30:23.497150 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 00:30:23.497232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:30:23.497240 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:30:23.497247 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 00:30:23.497254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:30:23.497262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:30:23.497270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:30:23.497277 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 00:30:23.497284 kernel: ACPI: bus type drm_connector registered
Nov 1 00:30:23.497291 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:30:23.497298 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:30:23.497305 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 00:30:23.497312 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 00:30:23.497319 kernel: fuse: init (API version 7.39)
Nov 1 00:30:23.497327 kernel: loop: module loaded
Nov 1 00:30:23.497334 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 00:30:23.497341 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 00:30:23.497348 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:30:23.497364 systemd-journald[1377]: Collecting audit messages is disabled.
Nov 1 00:30:23.497380 systemd-journald[1377]: Journal started
Nov 1 00:30:23.497395 systemd-journald[1377]: Runtime Journal (/run/log/journal/f8b3fdbf882f4a518c39ba297be503ba) is 8.0M, max 636.6M, 628.6M free.
Nov 1 00:30:21.594036 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:30:21.611984 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6.
Nov 1 00:30:21.612210 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 00:30:23.525152 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:30:23.558116 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 00:30:23.592277 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 00:30:23.625173 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:30:23.658277 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 1 00:30:23.658307 systemd[1]: Stopped verity-setup.service.
Nov 1 00:30:23.719139 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:30:23.740284 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:30:23.749696 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 00:30:23.759386 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 00:30:23.770385 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 00:30:23.780361 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 00:30:23.790348 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 00:30:23.800330 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 00:30:23.810464 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 00:30:23.821524 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:30:23.832671 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:30:23.832890 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 00:30:23.845063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:30:23.845484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:30:23.858053 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:30:23.858465 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:30:23.870037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:30:23.870438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:30:23.883042 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:30:23.883562 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 00:30:23.895050 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:30:23.895460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:30:23.906020 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:30:23.917987 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 00:30:23.930981 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 00:30:23.943987 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:30:23.980326 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 00:30:24.003361 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 00:30:24.017284 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 00:30:24.027324 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:30:24.027341 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:30:24.027902 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 00:30:24.048931 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 00:30:24.061088 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 00:30:24.071374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:30:24.090217 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 00:30:24.102831 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 00:30:24.114228 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:30:24.114826 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 00:30:24.121823 systemd-journald[1377]: Time spent on flushing to /var/log/journal/f8b3fdbf882f4a518c39ba297be503ba is 13.348ms for 1396 entries.
Nov 1 00:30:24.121823 systemd-journald[1377]: System Journal (/var/log/journal/f8b3fdbf882f4a518c39ba297be503ba) is 8.0M, max 195.6M, 187.6M free.
Nov 1 00:30:24.162700 systemd-journald[1377]: Received client request to flush runtime journal.
Nov 1 00:30:24.130701 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:30:24.131571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:30:24.153889 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 00:30:24.165926 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 00:30:24.181833 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 00:30:24.189161 kernel: loop0: detected capacity change from 0 to 8
Nov 1 00:30:24.206255 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 00:30:24.214097 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:30:24.224305 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 00:30:24.235320 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 00:30:24.250680 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 00:30:24.264157 kernel: loop1: detected capacity change from 0 to 140768
Nov 1 00:30:24.274334 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 00:30:24.285317 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:30:24.295332 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 00:30:24.308175 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 00:30:24.339101 kernel: loop2: detected capacity change from 0 to 219144
Nov 1 00:30:24.340302 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 00:30:24.351900 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:30:24.363666 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:30:24.364127 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 00:30:24.375715 udevadm[1413]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 1 00:30:24.379844 systemd-tmpfiles[1427]: ACLs are not supported, ignoring.
Nov 1 00:30:24.379859 systemd-tmpfiles[1427]: ACLs are not supported, ignoring.
Nov 1 00:30:24.382318 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:30:24.427151 kernel: loop3: detected capacity change from 0 to 142488
Nov 1 00:30:24.486004 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 00:30:24.512329 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:30:24.532099 kernel: loop4: detected capacity change from 0 to 8
Nov 1 00:30:24.535553 ldconfig[1403]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 00:30:24.537142 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 00:30:24.540821 systemd-udevd[1434]: Using default interface naming scheme 'v255'.
Nov 1 00:30:24.551146 kernel: loop5: detected capacity change from 0 to 140768
Nov 1 00:30:24.560274 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:30:24.593105 kernel: loop6: detected capacity change from 0 to 219144
Nov 1 00:30:24.593164 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (1498)
Nov 1 00:30:24.602678 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Nov 1 00:30:24.614101 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 00:30:24.614294 kernel: IPMI message handler: version 39.2
Nov 1 00:30:24.642098 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Nov 1 00:30:24.651102 kernel: loop7: detected capacity change from 0 to 142488
Nov 1 00:30:24.656099 kernel: ACPI: button: Sleep Button [SLPB]
Nov 1 00:30:24.678624 (sd-merge)[1435]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Nov 1 00:30:24.704281 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 1 00:30:24.678913 (sd-merge)[1435]: Merged extensions into '/usr'.
Nov 1 00:30:24.704257 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:30:24.721103 kernel: ipmi device interface
Nov 1 00:30:24.721157 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Nov 1 00:30:24.721453 kernel: ACPI: button: Power Button [PWRF]
Nov 1 00:30:24.745081 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Nov 1 00:30:24.768138 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Nov 1 00:30:24.788046 systemd[1]: Reloading requested from client PID 1408 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 00:30:24.788053 systemd[1]: Reloading...
Nov 1 00:30:24.805101 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Nov 1 00:30:24.827165 zram_generator::config[1551]: No configuration found.
Nov 1 00:30:24.840129 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Nov 1 00:30:24.840303 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Nov 1 00:30:24.916099 kernel: iTCO_vendor_support: vendor-support=0
Nov 1 00:30:24.916145 kernel: ipmi_si: IPMI System Interface driver
Nov 1 00:30:24.948724 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Nov 1 00:30:24.955191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:30:24.966885 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Nov 1 00:30:24.983655 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Nov 1 00:30:25.000347 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Nov 1 00:30:25.009101 systemd[1]: Reloading finished in 220 ms.
Nov 1 00:30:25.018761 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Nov 1 00:30:25.038073 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Nov 1 00:30:25.054243 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Nov 1 00:30:25.074460 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Nov 1 00:30:25.110098 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS
Nov 1 00:30:25.154097 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Nov 1 00:30:25.175800 kernel: intel_rapl_common: Found RAPL domain package
Nov 1 00:30:25.175829 kernel: intel_rapl_common: Found RAPL domain core
Nov 1 00:30:25.175848 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20)
Nov 1 00:30:25.175951 kernel: intel_rapl_common: Found RAPL domain uncore
Nov 1 00:30:25.175961 kernel: intel_rapl_common: Found RAPL domain dram
Nov 1 00:30:25.257757 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 00:30:25.282221 systemd[1]: Starting ensure-sysext.service...
Nov 1 00:30:25.289838 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 00:30:25.301966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:30:25.323362 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 00:30:25.341233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:30:25.348126 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Nov 1 00:30:25.349455 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 00:30:25.363056 systemd-tmpfiles[1621]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 00:30:25.363457 systemd-tmpfiles[1621]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 00:30:25.364404 systemd-tmpfiles[1621]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 00:30:25.364721 systemd-tmpfiles[1621]: ACLs are not supported, ignoring.
Nov 1 00:30:25.364792 systemd-tmpfiles[1621]: ACLs are not supported, ignoring.
Nov 1 00:30:25.367900 systemd-tmpfiles[1621]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 00:30:25.367906 systemd-tmpfiles[1621]: Skipping /boot
Nov 1 00:30:25.371127 kernel: ipmi_ssif: IPMI SSIF Interface driver
Nov 1 00:30:25.373998 systemd-tmpfiles[1621]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 00:30:25.374003 systemd-tmpfiles[1621]: Skipping /boot
Nov 1 00:30:25.381423 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 1 00:30:25.387273 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 00:30:25.387461 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:30:25.388286 systemd[1]: Reloading requested from client PID 1617 ('systemctl') (unit ensure-sysext.service)...
Nov 1 00:30:25.388297 systemd[1]: Reloading...
Nov 1 00:30:25.432103 zram_generator::config[1664]: No configuration found.
Nov 1 00:30:25.439241 systemd-networkd[1521]: lo: Link UP
Nov 1 00:30:25.439245 systemd-networkd[1521]: lo: Gained carrier
Nov 1 00:30:25.441976 systemd-networkd[1521]: bond0: netdev ready
Nov 1 00:30:25.442925 systemd-networkd[1521]: Enumeration completed
Nov 1 00:30:25.447576 systemd-networkd[1521]: enp2s0f0np0: Configuring with /etc/systemd/network/10-b8:ce:f6:04:87:b4.network.
Nov 1 00:30:25.497949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:30:25.552719 systemd[1]: Reloading finished in 164 ms.
Nov 1 00:30:25.567571 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:30:25.590335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:30:25.606082 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 00:30:25.616135 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 00:30:25.633076 augenrules[1742]: No rules
Nov 1 00:30:25.639626 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 1 00:30:25.651924 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 00:30:25.661341 lvm[1747]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:30:25.671574 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 00:30:25.683188 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:30:25.693872 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 00:30:25.706846 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 00:30:25.717511 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 00:30:25.729351 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 1 00:30:25.740397 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 00:30:25.768450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:30:25.779760 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:30:25.780521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:30:25.792070 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 1 00:30:25.803883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:30:25.805765 lvm[1756]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:30:25.813889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:30:25.820475 systemd-resolved[1750]: Positive Trust Anchors:
Nov 1 00:30:25.820480 systemd-resolved[1750]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:30:25.820507 systemd-resolved[1750]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:30:25.823282 systemd-resolved[1750]: Using system hostname 'ci-4081.3.6-n-d37906c143'.
Nov 1 00:30:25.830887 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:30:25.840583 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:30:25.841150 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Nov 1 00:30:25.841396 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 00:30:25.863081 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:30:25.863137 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link
Nov 1 00:30:25.863220 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:30:25.864419 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:30:25.864498 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:30:25.864643 systemd-networkd[1521]: enp2s0f1np1: Configuring with /etc/systemd/network/10-b8:ce:f6:04:87:b5.network.
Nov 1 00:30:25.887372 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:30:25.898988 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 1 00:30:25.912056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:30:25.912501 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:30:25.924968 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:30:25.925375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:30:25.936965 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 00:30:25.951797 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 00:30:25.976808 systemd[1]: Reached target network.target - Network.
Nov 1 00:30:25.985657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:30:26.000644 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:30:26.001272 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:30:26.014722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:30:26.027232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:30:26.046111 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:30:26.056163 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Nov 1 00:30:26.073371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:30:26.073622 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:30:26.073798 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:30:26.075768 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:30:26.076011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:30:26.085072 systemd-networkd[1521]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Nov 1 00:30:26.085190 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link
Nov 1 00:30:26.085518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:30:26.085703 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:30:26.086757 systemd-networkd[1521]: enp2s0f0np0: Link UP
Nov 1 00:30:26.087147 systemd-networkd[1521]: enp2s0f0np0: Gained carrier
Nov 1 00:30:26.107152 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Nov 1 00:30:26.118837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:30:26.118987 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:30:26.125666 systemd-networkd[1521]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:ce:f6:04:87:b4.network.
Nov 1 00:30:26.125968 systemd-networkd[1521]: enp2s0f1np1: Link UP
Nov 1 00:30:26.126304 systemd-networkd[1521]: enp2s0f1np1: Gained carrier
Nov 1 00:30:26.133049 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:30:26.133377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:30:26.139414 systemd-networkd[1521]: bond0: Link UP
Nov 1 00:30:26.139816 systemd-networkd[1521]: bond0: Gained carrier
Nov 1 00:30:26.147306 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:30:26.157895 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:30:26.168965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:30:26.181753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:30:26.192242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:30:26.192320 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:30:26.192371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:30:26.192964 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:30:26.193037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:30:26.213432 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:30:26.213503 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:30:26.217085 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex
Nov 1 00:30:26.217110 kernel: bond0: active interface up!
Nov 1 00:30:26.238391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:30:26.238460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:30:26.250377 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:30:26.250445 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:30:26.261043 systemd[1]: Finished ensure-sysext.service.
Nov 1 00:30:26.270550 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:30:26.270582 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:30:26.280266 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 1 00:30:26.314605 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 1 00:30:26.325251 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:30:26.343306 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 00:30:26.349143 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Nov 1 00:30:26.359146 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 00:30:26.370130 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 00:30:26.381122 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 00:30:26.381137 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:30:26.389121 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 00:30:26.398190 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 00:30:26.408164 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 00:30:26.419122 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:30:26.427326 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:30:26.437815 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:30:26.447475 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:30:26.457402 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:30:26.467173 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:30:26.477122 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:30:26.485141 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:30:26.485155 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:30:26.491165 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:30:26.501848 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 1 00:30:26.511687 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:30:26.520736 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:30:26.523695 coreos-metadata[1792]: Nov 01 00:30:26.523 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 00:30:26.530805 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:30:26.532078 dbus-daemon[1793]: [system] SELinux support is enabled Nov 1 00:30:26.532559 jq[1796]: false Nov 1 00:30:26.541132 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:30:26.541728 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 1 00:30:26.549819 extend-filesystems[1798]: Found loop4
Nov 1 00:30:26.549819 extend-filesystems[1798]: Found loop5
Nov 1 00:30:26.612213 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks
Nov 1 00:30:26.612231 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (1513)
Nov 1 00:30:26.559276 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found loop6
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found loop7
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found sda
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found sdb
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found sdb1
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found sdb2
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found sdb3
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found usr
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found sdb4
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found sdb6
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found sdb7
Nov 1 00:30:26.612376 extend-filesystems[1798]: Found sdb9
Nov 1 00:30:26.612376 extend-filesystems[1798]: Checking size of /dev/sdb9
Nov 1 00:30:26.612376 extend-filesystems[1798]: Resized partition /dev/sdb9
Nov 1 00:30:26.599236 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 00:30:26.761227 extend-filesystems[1806]: resize2fs 1.47.1 (20-May-2024)
Nov 1 00:30:26.631443 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 00:30:26.643537 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 00:30:26.649932 systemd[1]: Starting tcsd.service - TCG Core Services Daemon...
Nov 1 00:30:26.674523 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 00:30:26.779458 update_engine[1823]: I20251101 00:30:26.699721  1823 main.cc:92] Flatcar Update Engine starting
Nov 1 00:30:26.779458 update_engine[1823]: I20251101 00:30:26.700642  1823 update_check_scheduler.cc:74] Next update check in 8m9s
Nov 1 00:30:26.674945 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 00:30:26.780273 jq[1824]: true
Nov 1 00:30:26.692839 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 00:30:26.702512 systemd-logind[1818]: Watching system buttons on /dev/input/event3 (Power Button)
Nov 1 00:30:26.702523 systemd-logind[1818]: Watching system buttons on /dev/input/event2 (Sleep Button)
Nov 1 00:30:26.702531 systemd-logind[1818]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Nov 1 00:30:26.702687 systemd-logind[1818]: New seat seat0.
Nov 1 00:30:26.726439 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 00:30:26.747186 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 1 00:30:26.780399 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:30:26.780491 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 00:30:26.780682 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:30:26.780761 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 00:30:26.790567 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:30:26.790650 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 00:30:26.791858 sshd_keygen[1822]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 00:30:26.804292 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 1 00:30:26.804392 (ntainerd)[1835]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 1 00:30:26.805926 jq[1834]: true
Nov 1 00:30:26.817726 dbus-daemon[1793]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 00:30:26.819512 tar[1826]: linux-amd64/LICENSE
Nov 1 00:30:26.819642 tar[1826]: linux-amd64/helm
Nov 1 00:30:26.823307 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Nov 1 00:30:26.823405 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped.
Nov 1 00:30:26.827156 systemd[1]: Started update-engine.service - Update Engine.
Nov 1 00:30:26.837526 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 1 00:30:26.846214 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:30:26.846316 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 00:30:26.857219 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:30:26.857323 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 00:30:26.873644 bash[1864]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:30:26.886257 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 1 00:30:26.899530 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 1 00:30:26.906794 locksmithd[1871]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 00:30:26.910443 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 00:30:26.910536 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 1 00:30:26.936386 systemd[1]: Starting sshkeys.service...
Nov 1 00:30:26.944053 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 1 00:30:26.956376 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 1 00:30:26.967982 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 1 00:30:26.976127 containerd[1835]: time="2025-11-01T00:30:26.976072825Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 1 00:30:26.979550 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 1 00:30:26.988815 containerd[1835]: time="2025-11-01T00:30:26.988759725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:30:26.989622 containerd[1835]: time="2025-11-01T00:30:26.989597773Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:30:26.989676 containerd[1835]: time="2025-11-01T00:30:26.989620925Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 00:30:26.989676 containerd[1835]: time="2025-11-01T00:30:26.989636428Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 00:30:26.989767 containerd[1835]: time="2025-11-01T00:30:26.989754323Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 1 00:30:26.989805 containerd[1835]: time="2025-11-01T00:30:26.989771053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 1 00:30:26.989833 containerd[1835]: time="2025-11-01T00:30:26.989818178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:30:26.989860 containerd[1835]: time="2025-11-01T00:30:26.989831014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:30:26.989980 containerd[1835]: time="2025-11-01T00:30:26.989965078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:30:26.990011 containerd[1835]: time="2025-11-01T00:30:26.989979994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 00:30:26.990011 containerd[1835]: time="2025-11-01T00:30:26.989991562Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:30:26.990011 containerd[1835]: time="2025-11-01T00:30:26.990002348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 00:30:26.990095 containerd[1835]: time="2025-11-01T00:30:26.990062328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:30:26.990309 containerd[1835]: time="2025-11-01T00:30:26.990295472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:30:26.990401 containerd[1835]: time="2025-11-01T00:30:26.990386189Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:30:26.990437 containerd[1835]: time="2025-11-01T00:30:26.990400510Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 00:30:26.990473 containerd[1835]: time="2025-11-01T00:30:26.990462903Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 00:30:26.990513 containerd[1835]: time="2025-11-01T00:30:26.990502527Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 00:30:26.991548 coreos-metadata[1891]: Nov 01 00:30:26.991 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 1 00:30:26.991836 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 1 00:30:27.001758 containerd[1835]: time="2025-11-01T00:30:27.001711048Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 00:30:27.001758 containerd[1835]: time="2025-11-01T00:30:27.001748905Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 00:30:27.001809 containerd[1835]: time="2025-11-01T00:30:27.001765519Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 1 00:30:27.001809 containerd[1835]: time="2025-11-01T00:30:27.001781968Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 1 00:30:27.001809 containerd[1835]: time="2025-11-01T00:30:27.001796835Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 00:30:27.001939 containerd[1835]: time="2025-11-01T00:30:27.001900188Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 00:30:27.001955 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1.
Nov 1 00:30:27.002146 containerd[1835]: time="2025-11-01T00:30:27.002106785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 00:30:27.002231 containerd[1835]: time="2025-11-01T00:30:27.002186431Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 1 00:30:27.002231 containerd[1835]: time="2025-11-01T00:30:27.002201973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 1 00:30:27.002231 containerd[1835]: time="2025-11-01T00:30:27.002214482Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 1 00:30:27.002231 containerd[1835]: time="2025-11-01T00:30:27.002227236Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 00:30:27.002304 containerd[1835]: time="2025-11-01T00:30:27.002239521Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 00:30:27.002304 containerd[1835]: time="2025-11-01T00:30:27.002251537Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 00:30:27.002304 containerd[1835]: time="2025-11-01T00:30:27.002264191Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 00:30:27.002304 containerd[1835]: time="2025-11-01T00:30:27.002276889Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 00:30:27.002304 containerd[1835]: time="2025-11-01T00:30:27.002289151Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 00:30:27.002304 containerd[1835]: time="2025-11-01T00:30:27.002300970Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 00:30:27.002383 containerd[1835]: time="2025-11-01T00:30:27.002312820Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 00:30:27.002383 containerd[1835]: time="2025-11-01T00:30:27.002330440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002383 containerd[1835]: time="2025-11-01T00:30:27.002345016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002383 containerd[1835]: time="2025-11-01T00:30:27.002356803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002383 containerd[1835]: time="2025-11-01T00:30:27.002369045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002383 containerd[1835]: time="2025-11-01T00:30:27.002380589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002464 containerd[1835]: time="2025-11-01T00:30:27.002392933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002464 containerd[1835]: time="2025-11-01T00:30:27.002403864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002464 containerd[1835]: time="2025-11-01T00:30:27.002415795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002464 containerd[1835]: time="2025-11-01T00:30:27.002427648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002464 containerd[1835]: time="2025-11-01T00:30:27.002442229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002464 containerd[1835]: time="2025-11-01T00:30:27.002453614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002544 containerd[1835]: time="2025-11-01T00:30:27.002465417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002544 containerd[1835]: time="2025-11-01T00:30:27.002477230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002544 containerd[1835]: time="2025-11-01T00:30:27.002492599Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 1 00:30:27.002544 containerd[1835]: time="2025-11-01T00:30:27.002511455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002544 containerd[1835]: time="2025-11-01T00:30:27.002523246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002544 containerd[1835]: time="2025-11-01T00:30:27.002534041Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 00:30:27.002622 containerd[1835]: time="2025-11-01T00:30:27.002569821Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 00:30:27.002622 containerd[1835]: time="2025-11-01T00:30:27.002586116Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 1 00:30:27.002622 containerd[1835]: time="2025-11-01T00:30:27.002597156Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 00:30:27.002622 containerd[1835]: time="2025-11-01T00:30:27.002609025Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 1 00:30:27.002680 containerd[1835]: time="2025-11-01T00:30:27.002619760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002680 containerd[1835]: time="2025-11-01T00:30:27.002631979Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 1 00:30:27.002680 containerd[1835]: time="2025-11-01T00:30:27.002642104Z" level=info msg="NRI interface is disabled by configuration."
Nov 1 00:30:27.002680 containerd[1835]: time="2025-11-01T00:30:27.002651771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 00:30:27.002983 containerd[1835]: time="2025-11-01T00:30:27.002902087Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 00:30:27.002983 containerd[1835]: time="2025-11-01T00:30:27.002959609Z" level=info msg="Connect containerd service"
Nov 1 00:30:27.003080 containerd[1835]: time="2025-11-01T00:30:27.002985363Z" level=info msg="using legacy CRI server"
Nov 1 00:30:27.003080 containerd[1835]: time="2025-11-01T00:30:27.002991799Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 1 00:30:27.003080 containerd[1835]: time="2025-11-01T00:30:27.003067017Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 00:30:27.003518 containerd[1835]: time="2025-11-01T00:30:27.003475049Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:30:27.003653 containerd[1835]: time="2025-11-01T00:30:27.003604702Z" level=info msg="Start subscribing containerd event"
Nov 1 00:30:27.003653 containerd[1835]: time="2025-11-01T00:30:27.003634743Z" level=info msg="Start recovering state"
Nov 1 00:30:27.003699 containerd[1835]: time="2025-11-01T00:30:27.003685990Z" level=info msg="Start event monitor"
Nov 1 00:30:27.003699 containerd[1835]: time="2025-11-01T00:30:27.003690856Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 00:30:27.003727 containerd[1835]: time="2025-11-01T00:30:27.003698538Z" level=info msg="Start snapshots syncer"
Nov 1 00:30:27.003727 containerd[1835]: time="2025-11-01T00:30:27.003709870Z" level=info msg="Start cni network conf syncer for default"
Nov 1 00:30:27.003727 containerd[1835]: time="2025-11-01T00:30:27.003717863Z" level=info msg="Start streaming server"
Nov 1 00:30:27.003769 containerd[1835]: time="2025-11-01T00:30:27.003727366Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 00:30:27.003769 containerd[1835]: time="2025-11-01T00:30:27.003763059Z" level=info msg="containerd successfully booted in 0.028363s"
Nov 1 00:30:27.012337 systemd[1]: Reached target getty.target - Login Prompts.
Nov 1 00:30:27.020528 systemd[1]: Started containerd.service - containerd container runtime.
Nov 1 00:30:27.100447 tar[1826]: linux-amd64/README.md
Nov 1 00:30:27.110098 kernel: EXT4-fs (sdb9): resized filesystem to 116605649
Nov 1 00:30:27.136459 extend-filesystems[1806]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required
Nov 1 00:30:27.136459 extend-filesystems[1806]: old_desc_blocks = 1, new_desc_blocks = 56
Nov 1 00:30:27.136459 extend-filesystems[1806]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long.
Nov 1 00:30:27.177178 extend-filesystems[1798]: Resized filesystem in /dev/sdb9
Nov 1 00:30:27.136852 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 00:30:27.136965 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 1 00:30:27.185399 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 1 00:30:28.144187 systemd-networkd[1521]: bond0: Gained IPv6LL
Nov 1 00:30:28.145437 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 1 00:30:28.156743 systemd[1]: Reached target network-online.target - Network is Online.
Nov 1 00:30:28.174326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:30:28.184829 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 1 00:30:28.202875 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 1 00:30:29.014174 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2
Nov 1 00:30:29.014334 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity
Nov 1 00:30:29.033600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:30:29.046745 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:30:29.463401 kubelet[1932]: E1101 00:30:29.463288    1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:30:29.464364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:30:29.464444 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:30:29.589650 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 1 00:30:29.608427 systemd[1]: Started sshd@0-139.178.94.145:22-139.178.89.65:53782.service - OpenSSH per-connection server daemon (139.178.89.65:53782).
Nov 1 00:30:29.657409 sshd[1953]: Accepted publickey for core from 139.178.89.65 port 53782 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:29.658422 sshd[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:29.664046 systemd-logind[1818]: New session 1 of user core.
Nov 1 00:30:29.664867 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 1 00:30:29.684437 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 1 00:30:29.698149 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 1 00:30:29.723806 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 1 00:30:29.750205 (systemd)[1957]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:30:29.858355 systemd[1957]: Queued start job for default target default.target.
Nov 1 00:30:29.866778 systemd[1957]: Created slice app.slice - User Application Slice.
Nov 1 00:30:29.866792 systemd[1957]: Reached target paths.target - Paths.
Nov 1 00:30:29.866800 systemd[1957]: Reached target timers.target - Timers.
Nov 1 00:30:29.867432 systemd[1957]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 1 00:30:29.872987 systemd[1957]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 1 00:30:29.873015 systemd[1957]: Reached target sockets.target - Sockets.
Nov 1 00:30:29.873025 systemd[1957]: Reached target basic.target - Basic System.
Nov 1 00:30:29.873046 systemd[1957]: Reached target default.target - Main User Target.
Nov 1 00:30:29.873061 systemd[1957]: Startup finished in 107ms.
Nov 1 00:30:29.873193 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 1 00:30:29.884258 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 1 00:30:29.952315 systemd[1]: Started sshd@1-139.178.94.145:22-139.178.89.65:53786.service - OpenSSH per-connection server daemon (139.178.89.65:53786).
Nov 1 00:30:29.993264 sshd[1968]: Accepted publickey for core from 139.178.89.65 port 53786 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:29.993887 sshd[1968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:29.996246 systemd-logind[1818]: New session 2 of user core.
Nov 1 00:30:30.008287 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 1 00:30:30.067601 sshd[1968]: pam_unix(sshd:session): session closed for user core
Nov 1 00:30:30.091344 systemd[1]: sshd@1-139.178.94.145:22-139.178.89.65:53786.service: Deactivated successfully.
Nov 1 00:30:30.094705 systemd[1]: session-2.scope: Deactivated successfully.
Nov 1 00:30:30.097917 systemd-logind[1818]: Session 2 logged out. Waiting for processes to exit.
Nov 1 00:30:30.113457 systemd[1]: Started sshd@2-139.178.94.145:22-139.178.89.65:53798.service - OpenSSH per-connection server daemon (139.178.89.65:53798).
Nov 1 00:30:30.124917 systemd-logind[1818]: Removed session 2.
Nov 1 00:30:30.144054 sshd[1975]: Accepted publickey for core from 139.178.89.65 port 53798 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:30.145019 sshd[1975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:30.148370 systemd-logind[1818]: New session 3 of user core.
Nov 1 00:30:30.161733 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 1 00:30:30.242395 sshd[1975]: pam_unix(sshd:session): session closed for user core
Nov 1 00:30:30.248762 systemd[1]: sshd@2-139.178.94.145:22-139.178.89.65:53798.service: Deactivated successfully.
Nov 1 00:30:30.252465 systemd[1]: session-3.scope: Deactivated successfully.
Nov 1 00:30:30.255630 systemd-logind[1818]: Session 3 logged out. Waiting for processes to exit.
Nov 1 00:30:30.258373 systemd-logind[1818]: Removed session 3.
Nov 1 00:30:31.614130 systemd-timesyncd[1784]: Contacted time server 66.85.78.80:123 (0.flatcar.pool.ntp.org).
Nov 1 00:30:31.614285 systemd-timesyncd[1784]: Initial clock synchronization to Sat 2025-11-01 00:30:31.778319 UTC.
Nov 1 00:30:32.070403 login[1903]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 00:30:32.071249 login[1905]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 00:30:32.073208 systemd-logind[1818]: New session 4 of user core. Nov 1 00:30:32.085558 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:30:32.087055 systemd-logind[1818]: New session 5 of user core. Nov 1 00:30:32.087821 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:30:36.230588 coreos-metadata[1891]: Nov 01 00:30:36.230 INFO Fetch successful Nov 1 00:30:36.268522 unknown[1891]: wrote ssh authorized keys file for user: core Nov 1 00:30:36.288487 update-ssh-keys[2005]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:30:36.288803 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 00:30:36.289551 systemd[1]: Finished sshkeys.service. Nov 1 00:30:39.462479 coreos-metadata[1792]: Nov 01 00:30:39.462 INFO Fetch successful Nov 1 00:30:39.510590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:30:39.513993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:30:39.517775 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 00:30:39.519038 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Nov 1 00:30:39.781417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:30:39.783647 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:30:39.803676 kubelet[2023]: E1101 00:30:39.803622 2023 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:30:39.805512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:30:39.805593 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:30:40.368425 systemd[1]: Started sshd@3-139.178.94.145:22-139.178.89.65:48838.service - OpenSSH per-connection server daemon (139.178.89.65:48838).
Nov 1 00:30:40.400366 sshd[2043]: Accepted publickey for core from 139.178.89.65 port 48838 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:40.401042 sshd[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:40.403670 systemd-logind[1818]: New session 6 of user core.
Nov 1 00:30:40.421394 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 1 00:30:40.474335 sshd[2043]: pam_unix(sshd:session): session closed for user core
Nov 1 00:30:40.487785 systemd[1]: sshd@3-139.178.94.145:22-139.178.89.65:48838.service: Deactivated successfully.
Nov 1 00:30:40.488563 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 00:30:40.489315 systemd-logind[1818]: Session 6 logged out. Waiting for processes to exit.
Nov 1 00:30:40.490104 systemd[1]: Started sshd@4-139.178.94.145:22-139.178.89.65:48848.service - OpenSSH per-connection server daemon (139.178.89.65:48848).
Nov 1 00:30:40.490685 systemd-logind[1818]: Removed session 6.
Nov 1 00:30:40.523359 sshd[2050]: Accepted publickey for core from 139.178.89.65 port 48848 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:40.524083 sshd[2050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:40.526834 systemd-logind[1818]: New session 7 of user core.
Nov 1 00:30:40.538376 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 1 00:30:40.598462 sshd[2050]: pam_unix(sshd:session): session closed for user core
Nov 1 00:30:40.606292 systemd[1]: sshd@4-139.178.94.145:22-139.178.89.65:48848.service: Deactivated successfully.
Nov 1 00:30:40.609481 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 00:30:40.609868 systemd-logind[1818]: Session 7 logged out. Waiting for processes to exit.
Nov 1 00:30:40.610545 systemd-logind[1818]: Removed session 7.
Nov 1 00:30:40.822524 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
Nov 1 00:30:40.823903 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 1 00:30:40.824338 systemd[1]: Startup finished in 1.941s (kernel) + 25.999s (initrd) + 19.952s (userspace) = 47.893s.
Nov 1 00:30:50.007216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 00:30:50.021367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:30:50.295379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:30:50.306338 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:30:50.339269 kubelet[2065]: E1101 00:30:50.339227 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:30:50.340356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:30:50.340431 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:30:50.671405 systemd[1]: Started sshd@5-139.178.94.145:22-139.178.89.65:49346.service - OpenSSH per-connection server daemon (139.178.89.65:49346).
Nov 1 00:30:50.700982 sshd[2082]: Accepted publickey for core from 139.178.89.65 port 49346 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:50.701709 sshd[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:50.704332 systemd-logind[1818]: New session 8 of user core.
Nov 1 00:30:50.713403 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 1 00:30:50.764399 sshd[2082]: pam_unix(sshd:session): session closed for user core
Nov 1 00:30:50.779736 systemd[1]: sshd@5-139.178.94.145:22-139.178.89.65:49346.service: Deactivated successfully.
Nov 1 00:30:50.780463 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 00:30:50.781126 systemd-logind[1818]: Session 8 logged out. Waiting for processes to exit.
Nov 1 00:30:50.781884 systemd[1]: Started sshd@6-139.178.94.145:22-139.178.89.65:49358.service - OpenSSH per-connection server daemon (139.178.89.65:49358).
Nov 1 00:30:50.782441 systemd-logind[1818]: Removed session 8.
Nov 1 00:30:50.814571 sshd[2089]: Accepted publickey for core from 139.178.89.65 port 49358 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:50.815282 sshd[2089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:50.817862 systemd-logind[1818]: New session 9 of user core.
Nov 1 00:30:50.834318 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 1 00:30:50.884163 sshd[2089]: pam_unix(sshd:session): session closed for user core
Nov 1 00:30:50.894686 systemd[1]: sshd@6-139.178.94.145:22-139.178.89.65:49358.service: Deactivated successfully.
Nov 1 00:30:50.895416 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 00:30:50.896044 systemd-logind[1818]: Session 9 logged out. Waiting for processes to exit.
Nov 1 00:30:50.896803 systemd[1]: Started sshd@7-139.178.94.145:22-139.178.89.65:49372.service - OpenSSH per-connection server daemon (139.178.89.65:49372).
Nov 1 00:30:50.897264 systemd-logind[1818]: Removed session 9.
Nov 1 00:30:50.941057 sshd[2096]: Accepted publickey for core from 139.178.89.65 port 49372 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:50.941990 sshd[2096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:50.945476 systemd-logind[1818]: New session 10 of user core.
Nov 1 00:30:50.956351 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 1 00:30:51.020963 sshd[2096]: pam_unix(sshd:session): session closed for user core
Nov 1 00:30:51.038698 systemd[1]: sshd@7-139.178.94.145:22-139.178.89.65:49372.service: Deactivated successfully.
Nov 1 00:30:51.042247 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 00:30:51.044291 systemd-logind[1818]: Session 10 logged out. Waiting for processes to exit.
Nov 1 00:30:51.059872 systemd[1]: Started sshd@8-139.178.94.145:22-139.178.89.65:49384.service - OpenSSH per-connection server daemon (139.178.89.65:49384).
Nov 1 00:30:51.062626 systemd-logind[1818]: Removed session 10.
Nov 1 00:30:51.121552 sshd[2103]: Accepted publickey for core from 139.178.89.65 port 49384 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:51.122355 sshd[2103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:51.125334 systemd-logind[1818]: New session 11 of user core.
Nov 1 00:30:51.135338 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 1 00:30:51.192437 sudo[2106]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 00:30:51.192590 sudo[2106]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:30:51.207904 sudo[2106]: pam_unix(sudo:session): session closed for user root
Nov 1 00:30:51.208949 sshd[2103]: pam_unix(sshd:session): session closed for user core
Nov 1 00:30:51.219996 systemd[1]: sshd@8-139.178.94.145:22-139.178.89.65:49384.service: Deactivated successfully.
Nov 1 00:30:51.220902 systemd[1]: session-11.scope: Deactivated successfully.
Nov 1 00:30:51.221870 systemd-logind[1818]: Session 11 logged out. Waiting for processes to exit.
Nov 1 00:30:51.222814 systemd[1]: Started sshd@9-139.178.94.145:22-139.178.89.65:49390.service - OpenSSH per-connection server daemon (139.178.89.65:49390).
Nov 1 00:30:51.223490 systemd-logind[1818]: Removed session 11.
Nov 1 00:30:51.281349 sshd[2111]: Accepted publickey for core from 139.178.89.65 port 49390 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:51.282570 sshd[2111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:51.286741 systemd-logind[1818]: New session 12 of user core.
Nov 1 00:30:51.303458 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 1 00:30:51.359596 sudo[2115]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 1 00:30:51.359746 sudo[2115]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:30:51.361870 sudo[2115]: pam_unix(sudo:session): session closed for user root
Nov 1 00:30:51.364464 sudo[2114]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 1 00:30:51.364615 sudo[2114]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:30:51.379427 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 1 00:30:51.380507 auditctl[2118]: No rules
Nov 1 00:30:51.380724 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 00:30:51.380847 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 1 00:30:51.382395 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 00:30:51.398270 augenrules[2136]: No rules
Nov 1 00:30:51.398630 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 00:30:51.399227 sudo[2114]: pam_unix(sudo:session): session closed for user root
Nov 1 00:30:51.400158 sshd[2111]: pam_unix(sshd:session): session closed for user core
Nov 1 00:30:51.418068 systemd[1]: sshd@9-139.178.94.145:22-139.178.89.65:49390.service: Deactivated successfully.
Nov 1 00:30:51.419018 systemd[1]: session-12.scope: Deactivated successfully.
Nov 1 00:30:51.419951 systemd-logind[1818]: Session 12 logged out. Waiting for processes to exit.
Nov 1 00:30:51.420911 systemd[1]: Started sshd@10-139.178.94.145:22-139.178.89.65:49396.service - OpenSSH per-connection server daemon (139.178.89.65:49396).
Nov 1 00:30:51.421679 systemd-logind[1818]: Removed session 12.
Nov 1 00:30:51.480282 sshd[2146]: Accepted publickey for core from 139.178.89.65 port 49396 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:30:51.481383 sshd[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:30:51.485440 systemd-logind[1818]: New session 13 of user core.
Nov 1 00:30:51.500468 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 1 00:30:51.555816 sudo[2149]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 00:30:51.556031 sudo[2149]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:30:51.859459 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 1 00:30:51.859522 (dockerd)[2173]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 1 00:30:52.159472 dockerd[2173]: time="2025-11-01T00:30:52.159405783Z" level=info msg="Starting up"
Nov 1 00:30:52.227841 dockerd[2173]: time="2025-11-01T00:30:52.227794926Z" level=info msg="Loading containers: start."
Nov 1 00:30:52.310158 kernel: Initializing XFRM netlink socket
Nov 1 00:30:52.375036 systemd-networkd[1521]: docker0: Link UP
Nov 1 00:30:52.394973 dockerd[2173]: time="2025-11-01T00:30:52.394955673Z" level=info msg="Loading containers: done."
Nov 1 00:30:52.404312 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck907344231-merged.mount: Deactivated successfully.
Nov 1 00:30:52.404486 dockerd[2173]: time="2025-11-01T00:30:52.404322456Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 00:30:52.404486 dockerd[2173]: time="2025-11-01T00:30:52.404372503Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 1 00:30:52.404486 dockerd[2173]: time="2025-11-01T00:30:52.404423294Z" level=info msg="Daemon has completed initialization"
Nov 1 00:30:52.417814 dockerd[2173]: time="2025-11-01T00:30:52.417769794Z" level=info msg="API listen on /run/docker.sock"
Nov 1 00:30:52.417822 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 1 00:30:53.128986 containerd[1835]: time="2025-11-01T00:30:53.128965293Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Nov 1 00:30:53.757362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3866685662.mount: Deactivated successfully.
Nov 1 00:30:54.463315 containerd[1835]: time="2025-11-01T00:30:54.463261688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:54.463545 containerd[1835]: time="2025-11-01T00:30:54.463438445Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392"
Nov 1 00:30:54.463874 containerd[1835]: time="2025-11-01T00:30:54.463863232Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:54.465942 containerd[1835]: time="2025-11-01T00:30:54.465898405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:54.466492 containerd[1835]: time="2025-11-01T00:30:54.466444014Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.337454737s"
Nov 1 00:30:54.466492 containerd[1835]: time="2025-11-01T00:30:54.466468776Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Nov 1 00:30:54.466805 containerd[1835]: time="2025-11-01T00:30:54.466763740Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Nov 1 00:30:55.501319 containerd[1835]: time="2025-11-01T00:30:55.501266793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:55.501525 containerd[1835]: time="2025-11-01T00:30:55.501496609Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757"
Nov 1 00:30:55.502009 containerd[1835]: time="2025-11-01T00:30:55.501970223Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:55.503851 containerd[1835]: time="2025-11-01T00:30:55.503810156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:55.504411 containerd[1835]: time="2025-11-01T00:30:55.504372917Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.037594552s"
Nov 1 00:30:55.504411 containerd[1835]: time="2025-11-01T00:30:55.504392167Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Nov 1 00:30:55.504704 containerd[1835]: time="2025-11-01T00:30:55.504642412Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Nov 1 00:30:56.301918 containerd[1835]: time="2025-11-01T00:30:56.301861898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:56.301998 containerd[1835]: time="2025-11-01T00:30:56.301977741Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093"
Nov 1 00:30:56.302554 containerd[1835]: time="2025-11-01T00:30:56.302513538Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:56.304182 containerd[1835]: time="2025-11-01T00:30:56.304103018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:56.304831 containerd[1835]: time="2025-11-01T00:30:56.304810311Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 800.148937ms"
Nov 1 00:30:56.304868 containerd[1835]: time="2025-11-01T00:30:56.304835740Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Nov 1 00:30:56.305175 containerd[1835]: time="2025-11-01T00:30:56.305110045Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Nov 1 00:30:57.086634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867934366.mount: Deactivated successfully.
Nov 1 00:30:57.240120 containerd[1835]: time="2025-11-01T00:30:57.240062805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:57.240344 containerd[1835]: time="2025-11-01T00:30:57.240253621Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699"
Nov 1 00:30:57.240529 containerd[1835]: time="2025-11-01T00:30:57.240487653Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:57.241512 containerd[1835]: time="2025-11-01T00:30:57.241469619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:57.241885 containerd[1835]: time="2025-11-01T00:30:57.241843361Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 936.702424ms"
Nov 1 00:30:57.241885 containerd[1835]: time="2025-11-01T00:30:57.241858993Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Nov 1 00:30:57.242143 containerd[1835]: time="2025-11-01T00:30:57.242097115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Nov 1 00:30:57.685595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715345999.mount: Deactivated successfully.
Nov 1 00:30:58.308401 containerd[1835]: time="2025-11-01T00:30:58.308344786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:58.308613 containerd[1835]: time="2025-11-01T00:30:58.308561455Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Nov 1 00:30:58.309001 containerd[1835]: time="2025-11-01T00:30:58.308962331Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:58.310726 containerd[1835]: time="2025-11-01T00:30:58.310686714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:58.311421 containerd[1835]: time="2025-11-01T00:30:58.311377173Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.069265906s"
Nov 1 00:30:58.311421 containerd[1835]: time="2025-11-01T00:30:58.311395994Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Nov 1 00:30:58.311688 containerd[1835]: time="2025-11-01T00:30:58.311648936Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Nov 1 00:30:58.877143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184681580.mount: Deactivated successfully.
Nov 1 00:30:58.878335 containerd[1835]: time="2025-11-01T00:30:58.878315853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:58.878577 containerd[1835]: time="2025-11-01T00:30:58.878551677Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Nov 1 00:30:58.879002 containerd[1835]: time="2025-11-01T00:30:58.878989118Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:58.880049 containerd[1835]: time="2025-11-01T00:30:58.880034925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:30:58.880556 containerd[1835]: time="2025-11-01T00:30:58.880541157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 568.874963ms"
Nov 1 00:30:58.880635 containerd[1835]: time="2025-11-01T00:30:58.880557362Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Nov 1 00:30:58.880856 containerd[1835]: time="2025-11-01T00:30:58.880846624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Nov 1 00:31:00.505807 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 1 00:31:00.518229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:31:00.741105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:31:00.743520 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:31:00.772039 kubelet[2520]: E1101 00:31:00.771964 2520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:31:00.773496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:31:00.773675 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:31:00.832075 containerd[1835]: time="2025-11-01T00:31:00.832048703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:31:00.832298 containerd[1835]: time="2025-11-01T00:31:00.832273746Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593"
Nov 1 00:31:00.832694 containerd[1835]: time="2025-11-01T00:31:00.832682907Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:31:00.834461 containerd[1835]: time="2025-11-01T00:31:00.834422455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:31:00.835169 containerd[1835]: time="2025-11-01T00:31:00.835132668Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 1.954271904s"
Nov 1 00:31:00.835169 containerd[1835]: time="2025-11-01T00:31:00.835149505Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Nov 1 00:31:03.135827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:31:03.149490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:31:03.165667 systemd[1]: Reloading requested from client PID 2589 ('systemctl') (unit session-13.scope)...
Nov 1 00:31:03.165674 systemd[1]: Reloading...
Nov 1 00:31:03.202173 zram_generator::config[2628]: No configuration found.
Nov 1 00:31:03.267877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:31:03.328067 systemd[1]: Reloading finished in 162 ms.
Nov 1 00:31:03.353895 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 1 00:31:03.353953 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 1 00:31:03.354077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:31:03.364542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:31:03.600579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:31:03.604086 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 00:31:03.624387 kubelet[2692]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 00:31:03.624387 kubelet[2692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:31:03.624602 kubelet[2692]: I1101 00:31:03.624392 2692 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 00:31:03.867457 kubelet[2692]: I1101 00:31:03.867396 2692 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 1 00:31:03.867457 kubelet[2692]: I1101 00:31:03.867407 2692 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 00:31:03.868064 kubelet[2692]: I1101 00:31:03.868027 2692 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 1 00:31:03.868064 kubelet[2692]: I1101 00:31:03.868039 2692 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 00:31:03.868262 kubelet[2692]: I1101 00:31:03.868226 2692 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 1 00:31:03.872089 kubelet[2692]: E1101 00:31:03.872070 2692 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.94.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.94.145:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 1 00:31:03.872374 kubelet[2692]: I1101 00:31:03.872364 2692 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 00:31:03.874320 kubelet[2692]: E1101 00:31:03.874309 2692 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 00:31:03.874352 kubelet[2692]: I1101 00:31:03.874331 2692 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Nov 1 00:31:03.882759 kubelet[2692]: I1101 00:31:03.882751 2692 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 1 00:31:03.883911 kubelet[2692]: I1101 00:31:03.883867 2692 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 00:31:03.883969 kubelet[2692]: I1101 00:31:03.883883 2692 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-d37906c143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 00:31:03.884029 kubelet[2692]: I1101 00:31:03.883973 2692 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 00:31:03.884029 kubelet[2692]: I1101 00:31:03.883978 2692 container_manager_linux.go:306] "Creating device plugin manager"
Nov 1 00:31:03.884060 kubelet[2692]: I1101 00:31:03.884030 2692 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 1 00:31:03.885130 kubelet[2692]: I1101 00:31:03.885074 2692 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 00:31:03.887631 kubelet[2692]: I1101 00:31:03.887596 2692 kubelet.go:475] "Attempting to sync node with API server"
Nov 1 00:31:03.887631 kubelet[2692]: I1101 00:31:03.887605 2692 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 00:31:03.887631 kubelet[2692]: I1101 00:31:03.887631 2692 kubelet.go:387] "Adding apiserver pod source"
Nov 1 00:31:03.887708 kubelet[2692]: I1101 00:31:03.887637 2692 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 00:31:03.889393 kubelet[2692]: E1101 00:31:03.889372 2692 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.94.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-d37906c143&limit=500&resourceVersion=0\": dial tcp 139.178.94.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 1 00:31:03.889446 kubelet[2692]: E1101 00:31:03.889394 2692 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.94.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 1 00:31:03.889446 kubelet[2692]: I1101 00:31:03.889423 2692 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 00:31:03.890073 kubelet[2692]: I1101 00:31:03.890063 2692 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 1 00:31:03.890125 kubelet[2692]: I1101 00:31:03.890084 2692 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 1 00:31:03.890146 kubelet[2692]: W1101 00:31:03.890132 2692 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 1 00:31:03.891402 kubelet[2692]: I1101 00:31:03.891348 2692 server.go:1262] "Started kubelet"
Nov 1 00:31:03.891434 kubelet[2692]: I1101 00:31:03.891411 2692 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 00:31:03.891521 kubelet[2692]: I1101 00:31:03.891444 2692 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 00:31:03.891521 kubelet[2692]: I1101 00:31:03.891486 2692 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 1 00:31:03.891794 kubelet[2692]: I1101 00:31:03.891764 2692 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 00:31:03.891955 kubelet[2692]: I1101 00:31:03.891947 2692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 00:31:03.891983 kubelet[2692]: I1101 00:31:03.891955 2692 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 00:31:03.892020 kubelet[2692]: I1101 00:31:03.891989 2692 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 1 00:31:03.892020 kubelet[2692]: E1101 00:31:03.891999 2692 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-d37906c143\" not found"
Nov 1 00:31:03.892020 kubelet[2692]: I1101 00:31:03.892016 2692 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:31:03.892116 kubelet[2692]: I1101 00:31:03.892049 2692 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:31:03.892204 kubelet[2692]: E1101 00:31:03.892185 2692 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d37906c143?timeout=10s\": dial tcp 139.178.94.145:6443: connect: connection refused" interval="200ms" Nov 1 00:31:03.892265 kubelet[2692]: E1101 00:31:03.892249 2692 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.94.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:31:03.892430 kubelet[2692]: I1101 00:31:03.892423 2692 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:31:03.892480 kubelet[2692]: I1101 00:31:03.892469 2692 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:31:03.892683 kubelet[2692]: E1101 00:31:03.892671 2692 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:31:03.892915 kubelet[2692]: I1101 00:31:03.892907 2692 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:31:03.895663 kubelet[2692]: I1101 00:31:03.895645 2692 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:31:03.897941 kubelet[2692]: E1101 00:31:03.896956 2692 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.94.145:6443/api/v1/namespaces/default/events\": dial tcp 139.178.94.145:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-d37906c143.1873ba9d46bbb8c9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-d37906c143,UID:ci-4081.3.6-n-d37906c143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-d37906c143,},FirstTimestamp:2025-11-01 00:31:03.891335369 +0000 UTC m=+0.284892213,LastTimestamp:2025-11-01 00:31:03.891335369 +0000 UTC m=+0.284892213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-d37906c143,}" Nov 1 00:31:03.903770 kubelet[2692]: I1101 00:31:03.903754 2692 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:31:03.904270 kubelet[2692]: I1101 00:31:03.904260 2692 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:31:03.904270 kubelet[2692]: I1101 00:31:03.904270 2692 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:31:03.904330 kubelet[2692]: I1101 00:31:03.904283 2692 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:31:03.904330 kubelet[2692]: E1101 00:31:03.904303 2692 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:31:03.904556 kubelet[2692]: E1101 00:31:03.904510 2692 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.94.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:31:03.968181 kubelet[2692]: I1101 00:31:03.968147 2692 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:31:03.968181 kubelet[2692]: I1101 00:31:03.968166 2692 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:31:03.968181 kubelet[2692]: I1101 00:31:03.968181 2692 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:31:03.969195 kubelet[2692]: I1101 00:31:03.969182 2692 policy_none.go:49] "None policy: Start" Nov 1 00:31:03.969195 kubelet[2692]: I1101 00:31:03.969195 2692 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:31:03.969270 kubelet[2692]: I1101 00:31:03.969209 2692 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:31:03.970005 kubelet[2692]: I1101 00:31:03.969906 2692 policy_none.go:47] "Start" Nov 1 00:31:03.972094 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:31:03.982827 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 1 00:31:03.984757 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 00:31:03.992287 kubelet[2692]: E1101 00:31:03.992273 2692 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-d37906c143\" not found" Nov 1 00:31:04.001791 kubelet[2692]: E1101 00:31:04.001776 2692 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:31:04.001913 kubelet[2692]: I1101 00:31:04.001903 2692 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:31:04.001957 kubelet[2692]: I1101 00:31:04.001914 2692 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:31:04.002062 kubelet[2692]: I1101 00:31:04.002047 2692 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:31:04.002518 kubelet[2692]: E1101 00:31:04.002464 2692 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:31:04.002574 kubelet[2692]: E1101 00:31:04.002525 2692 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-d37906c143\" not found" Nov 1 00:31:04.024100 systemd[1]: Created slice kubepods-burstable-pod150352b5b72dde4efcc6c1d86449ba28.slice - libcontainer container kubepods-burstable-pod150352b5b72dde4efcc6c1d86449ba28.slice. Nov 1 00:31:04.033575 kubelet[2692]: E1101 00:31:04.033528 2692 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d37906c143\" not found" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.035202 systemd[1]: Created slice kubepods-burstable-pod897f2ec3a9e6fa0cc50eafc22e53b763.slice - libcontainer container kubepods-burstable-pod897f2ec3a9e6fa0cc50eafc22e53b763.slice. 
Nov 1 00:31:04.051102 kubelet[2692]: E1101 00:31:04.051052 2692 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d37906c143\" not found" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.053158 systemd[1]: Created slice kubepods-burstable-podccbbf6dc93ccff2683f4b6e5e782998a.slice - libcontainer container kubepods-burstable-podccbbf6dc93ccff2683f4b6e5e782998a.slice. Nov 1 00:31:04.054538 kubelet[2692]: E1101 00:31:04.054498 2692 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d37906c143\" not found" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.093963 kubelet[2692]: E1101 00:31:04.093844 2692 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d37906c143?timeout=10s\": dial tcp 139.178.94.145:6443: connect: connection refused" interval="400ms" Nov 1 00:31:04.106385 kubelet[2692]: I1101 00:31:04.106294 2692 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.107021 kubelet[2692]: E1101 00:31:04.106925 2692 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.145:6443/api/v1/nodes\": dial tcp 139.178.94.145:6443: connect: connection refused" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.193249 kubelet[2692]: I1101 00:31:04.193130 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/150352b5b72dde4efcc6c1d86449ba28-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" (UID: \"150352b5b72dde4efcc6c1d86449ba28\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.193249 kubelet[2692]: I1101 00:31:04.193169 2692 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/150352b5b72dde4efcc6c1d86449ba28-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" (UID: \"150352b5b72dde4efcc6c1d86449ba28\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.193249 kubelet[2692]: I1101 00:31:04.193190 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/897f2ec3a9e6fa0cc50eafc22e53b763-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-d37906c143\" (UID: \"897f2ec3a9e6fa0cc50eafc22e53b763\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.193249 kubelet[2692]: I1101 00:31:04.193206 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccbbf6dc93ccff2683f4b6e5e782998a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d37906c143\" (UID: \"ccbbf6dc93ccff2683f4b6e5e782998a\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.193435 kubelet[2692]: I1101 00:31:04.193250 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccbbf6dc93ccff2683f4b6e5e782998a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-d37906c143\" (UID: \"ccbbf6dc93ccff2683f4b6e5e782998a\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.193435 kubelet[2692]: I1101 00:31:04.193293 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/150352b5b72dde4efcc6c1d86449ba28-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" (UID: 
\"150352b5b72dde4efcc6c1d86449ba28\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.193435 kubelet[2692]: I1101 00:31:04.193329 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/150352b5b72dde4efcc6c1d86449ba28-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" (UID: \"150352b5b72dde4efcc6c1d86449ba28\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.193435 kubelet[2692]: I1101 00:31:04.193348 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/150352b5b72dde4efcc6c1d86449ba28-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" (UID: \"150352b5b72dde4efcc6c1d86449ba28\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.193435 kubelet[2692]: I1101 00:31:04.193376 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccbbf6dc93ccff2683f4b6e5e782998a-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d37906c143\" (UID: \"ccbbf6dc93ccff2683f4b6e5e782998a\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.311179 kubelet[2692]: I1101 00:31:04.311078 2692 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.311912 kubelet[2692]: E1101 00:31:04.311820 2692 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.145:6443/api/v1/nodes\": dial tcp 139.178.94.145:6443: connect: connection refused" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.337148 containerd[1835]: time="2025-11-01T00:31:04.337088265Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-d37906c143,Uid:150352b5b72dde4efcc6c1d86449ba28,Namespace:kube-system,Attempt:0,}" Nov 1 00:31:04.352376 containerd[1835]: time="2025-11-01T00:31:04.352332265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-d37906c143,Uid:897f2ec3a9e6fa0cc50eafc22e53b763,Namespace:kube-system,Attempt:0,}" Nov 1 00:31:04.356323 containerd[1835]: time="2025-11-01T00:31:04.356271822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-d37906c143,Uid:ccbbf6dc93ccff2683f4b6e5e782998a,Namespace:kube-system,Attempt:0,}" Nov 1 00:31:04.495265 kubelet[2692]: E1101 00:31:04.495039 2692 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d37906c143?timeout=10s\": dial tcp 139.178.94.145:6443: connect: connection refused" interval="800ms" Nov 1 00:31:04.714100 kubelet[2692]: I1101 00:31:04.714048 2692 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.714349 kubelet[2692]: E1101 00:31:04.714261 2692 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.145:6443/api/v1/nodes\": dial tcp 139.178.94.145:6443: connect: connection refused" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:04.800410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665338004.mount: Deactivated successfully. 
Nov 1 00:31:04.819841 containerd[1835]: time="2025-11-01T00:31:04.819785355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:31:04.820052 containerd[1835]: time="2025-11-01T00:31:04.820012056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:31:04.820762 containerd[1835]: time="2025-11-01T00:31:04.820720041Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:31:04.821209 containerd[1835]: time="2025-11-01T00:31:04.821119311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:31:04.821209 containerd[1835]: time="2025-11-01T00:31:04.821130949Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:31:04.821764 containerd[1835]: time="2025-11-01T00:31:04.821722141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:31:04.821994 containerd[1835]: time="2025-11-01T00:31:04.821958891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:31:04.823682 containerd[1835]: time="2025-11-01T00:31:04.823639573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 471.276979ms" Nov 1 00:31:04.824279 containerd[1835]: time="2025-11-01T00:31:04.824237221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:31:04.824701 containerd[1835]: time="2025-11-01T00:31:04.824660980Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 468.357944ms" Nov 1 00:31:04.825727 containerd[1835]: time="2025-11-01T00:31:04.825686164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.501522ms" Nov 1 00:31:04.919351 containerd[1835]: time="2025-11-01T00:31:04.919302680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:04.919351 containerd[1835]: time="2025-11-01T00:31:04.919335520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:04.919351 containerd[1835]: time="2025-11-01T00:31:04.919343171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:04.919351 containerd[1835]: time="2025-11-01T00:31:04.919311271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:04.919351 containerd[1835]: time="2025-11-01T00:31:04.919340429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:04.919351 containerd[1835]: time="2025-11-01T00:31:04.919347737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:04.919552 containerd[1835]: time="2025-11-01T00:31:04.919344408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:04.919552 containerd[1835]: time="2025-11-01T00:31:04.919366914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:04.919552 containerd[1835]: time="2025-11-01T00:31:04.919373671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:04.919552 containerd[1835]: time="2025-11-01T00:31:04.919388362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:04.919552 containerd[1835]: time="2025-11-01T00:31:04.919387073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:04.919552 containerd[1835]: time="2025-11-01T00:31:04.919411370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:04.939434 systemd[1]: Started cri-containerd-773d69550fd26dc8d82249a4f8aed4334ab9bb257b04b95611f2edcf10c7b8ed.scope - libcontainer container 773d69550fd26dc8d82249a4f8aed4334ab9bb257b04b95611f2edcf10c7b8ed. Nov 1 00:31:04.940230 systemd[1]: Started cri-containerd-95711c2bd28a6b0187171084956b943bb55021e692ff47cd24b53a096e3658e3.scope - libcontainer container 95711c2bd28a6b0187171084956b943bb55021e692ff47cd24b53a096e3658e3. Nov 1 00:31:04.941021 systemd[1]: Started cri-containerd-f23cd3c9d7a4b383dd1227e934fdb35fedfd4623267a9593c46ba9adb4f16a13.scope - libcontainer container f23cd3c9d7a4b383dd1227e934fdb35fedfd4623267a9593c46ba9adb4f16a13. Nov 1 00:31:04.960880 kubelet[2692]: E1101 00:31:04.960855 2692 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.94.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:31:04.961876 containerd[1835]: time="2025-11-01T00:31:04.961855781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-d37906c143,Uid:897f2ec3a9e6fa0cc50eafc22e53b763,Namespace:kube-system,Attempt:0,} returns sandbox id \"773d69550fd26dc8d82249a4f8aed4334ab9bb257b04b95611f2edcf10c7b8ed\"" Nov 1 00:31:04.962067 containerd[1835]: time="2025-11-01T00:31:04.962054768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-d37906c143,Uid:ccbbf6dc93ccff2683f4b6e5e782998a,Namespace:kube-system,Attempt:0,} returns sandbox id \"95711c2bd28a6b0187171084956b943bb55021e692ff47cd24b53a096e3658e3\"" Nov 1 00:31:04.963331 containerd[1835]: time="2025-11-01T00:31:04.963318793Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-d37906c143,Uid:150352b5b72dde4efcc6c1d86449ba28,Namespace:kube-system,Attempt:0,} returns sandbox id \"f23cd3c9d7a4b383dd1227e934fdb35fedfd4623267a9593c46ba9adb4f16a13\"" Nov 1 00:31:04.965089 containerd[1835]: time="2025-11-01T00:31:04.965071561Z" level=info msg="CreateContainer within sandbox \"773d69550fd26dc8d82249a4f8aed4334ab9bb257b04b95611f2edcf10c7b8ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:31:04.965368 containerd[1835]: time="2025-11-01T00:31:04.965357435Z" level=info msg="CreateContainer within sandbox \"95711c2bd28a6b0187171084956b943bb55021e692ff47cd24b53a096e3658e3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:31:04.965845 containerd[1835]: time="2025-11-01T00:31:04.965822489Z" level=info msg="CreateContainer within sandbox \"f23cd3c9d7a4b383dd1227e934fdb35fedfd4623267a9593c46ba9adb4f16a13\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:31:04.971582 containerd[1835]: time="2025-11-01T00:31:04.971533270Z" level=info msg="CreateContainer within sandbox \"95711c2bd28a6b0187171084956b943bb55021e692ff47cd24b53a096e3658e3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e963c8d89bc6e4db77331e3b7d5f9fb069dffb27ccc0cfdd4aa0c7f28cb12f50\"" Nov 1 00:31:04.971820 containerd[1835]: time="2025-11-01T00:31:04.971805256Z" level=info msg="StartContainer for \"e963c8d89bc6e4db77331e3b7d5f9fb069dffb27ccc0cfdd4aa0c7f28cb12f50\"" Nov 1 00:31:04.972380 containerd[1835]: time="2025-11-01T00:31:04.972366878Z" level=info msg="CreateContainer within sandbox \"773d69550fd26dc8d82249a4f8aed4334ab9bb257b04b95611f2edcf10c7b8ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b8010e0edfe098f0efbfdb20839e76ff6ce99ebe49f380852d7306c1da6e058b\"" Nov 1 00:31:04.972594 containerd[1835]: time="2025-11-01T00:31:04.972581379Z" level=info msg="StartContainer for 
\"b8010e0edfe098f0efbfdb20839e76ff6ce99ebe49f380852d7306c1da6e058b\"" Nov 1 00:31:04.973681 containerd[1835]: time="2025-11-01T00:31:04.973666708Z" level=info msg="CreateContainer within sandbox \"f23cd3c9d7a4b383dd1227e934fdb35fedfd4623267a9593c46ba9adb4f16a13\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4f5be26d8a76ba7230925c5b300ae81c3e214a871ede56286a091e32fd08354a\"" Nov 1 00:31:04.973872 containerd[1835]: time="2025-11-01T00:31:04.973858600Z" level=info msg="StartContainer for \"4f5be26d8a76ba7230925c5b300ae81c3e214a871ede56286a091e32fd08354a\"" Nov 1 00:31:04.996416 systemd[1]: Started cri-containerd-4f5be26d8a76ba7230925c5b300ae81c3e214a871ede56286a091e32fd08354a.scope - libcontainer container 4f5be26d8a76ba7230925c5b300ae81c3e214a871ede56286a091e32fd08354a. Nov 1 00:31:04.997154 systemd[1]: Started cri-containerd-b8010e0edfe098f0efbfdb20839e76ff6ce99ebe49f380852d7306c1da6e058b.scope - libcontainer container b8010e0edfe098f0efbfdb20839e76ff6ce99ebe49f380852d7306c1da6e058b. Nov 1 00:31:04.997894 systemd[1]: Started cri-containerd-e963c8d89bc6e4db77331e3b7d5f9fb069dffb27ccc0cfdd4aa0c7f28cb12f50.scope - libcontainer container e963c8d89bc6e4db77331e3b7d5f9fb069dffb27ccc0cfdd4aa0c7f28cb12f50. 
Nov 1 00:31:05.025892 containerd[1835]: time="2025-11-01T00:31:05.025868585Z" level=info msg="StartContainer for \"b8010e0edfe098f0efbfdb20839e76ff6ce99ebe49f380852d7306c1da6e058b\" returns successfully" Nov 1 00:31:05.025892 containerd[1835]: time="2025-11-01T00:31:05.025892025Z" level=info msg="StartContainer for \"e963c8d89bc6e4db77331e3b7d5f9fb069dffb27ccc0cfdd4aa0c7f28cb12f50\" returns successfully" Nov 1 00:31:05.026012 containerd[1835]: time="2025-11-01T00:31:05.025868603Z" level=info msg="StartContainer for \"4f5be26d8a76ba7230925c5b300ae81c3e214a871ede56286a091e32fd08354a\" returns successfully" Nov 1 00:31:05.515746 kubelet[2692]: I1101 00:31:05.515728 2692 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.752902 kubelet[2692]: E1101 00:31:05.752880 2692 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-d37906c143\" not found" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.854554 kubelet[2692]: I1101 00:31:05.854467 2692 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.889172 kubelet[2692]: I1101 00:31:05.889121 2692 apiserver.go:52] "Watching apiserver" Nov 1 00:31:05.892794 kubelet[2692]: I1101 00:31:05.892755 2692 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:31:05.892794 kubelet[2692]: I1101 00:31:05.892781 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.895312 kubelet[2692]: E1101 00:31:05.895270 2692 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d37906c143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.895312 kubelet[2692]: I1101 00:31:05.895284 2692 kubelet.go:3219] "Creating a mirror pod for 
static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.900790 kubelet[2692]: E1101 00:31:05.900643 2692 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-d37906c143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.900790 kubelet[2692]: I1101 00:31:05.900683 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.902595 kubelet[2692]: E1101 00:31:05.902575 2692 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.907236 kubelet[2692]: I1101 00:31:05.907225 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.907974 kubelet[2692]: I1101 00:31:05.907962 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.908447 kubelet[2692]: E1101 00:31:05.908435 2692 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.908592 kubelet[2692]: I1101 00:31:05.908585 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.908816 kubelet[2692]: E1101 00:31:05.908803 2692 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-d37906c143\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:05.909373 kubelet[2692]: E1101 00:31:05.909360 2692 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d37906c143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:06.911119 kubelet[2692]: I1101 00:31:06.911038 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:06.911891 kubelet[2692]: I1101 00:31:06.911324 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:06.917318 kubelet[2692]: I1101 00:31:06.917262 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:31:06.918051 kubelet[2692]: I1101 00:31:06.917984 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:31:08.073769 systemd[1]: Reloading requested from client PID 3019 ('systemctl') (unit session-13.scope)... Nov 1 00:31:08.073777 systemd[1]: Reloading... Nov 1 00:31:08.123157 zram_generator::config[3058]: No configuration found. Nov 1 00:31:08.193489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:31:08.261997 systemd[1]: Reloading finished in 188 ms. Nov 1 00:31:08.289314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 1 00:31:08.289409 kubelet[2692]: I1101 00:31:08.289333 2692 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:31:08.301842 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:31:08.301948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:31:08.318568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:31:08.569129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:31:08.575523 (kubelet)[3123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:31:08.595487 kubelet[3123]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:31:08.595487 kubelet[3123]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:31:08.595704 kubelet[3123]: I1101 00:31:08.595524 3123 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:31:08.598779 kubelet[3123]: I1101 00:31:08.598742 3123 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:31:08.598779 kubelet[3123]: I1101 00:31:08.598754 3123 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:31:08.598779 kubelet[3123]: I1101 00:31:08.598767 3123 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:31:08.598779 kubelet[3123]: I1101 00:31:08.598770 3123 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:31:08.598880 kubelet[3123]: I1101 00:31:08.598875 3123 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:31:08.599524 kubelet[3123]: I1101 00:31:08.599493 3123 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:31:08.601083 kubelet[3123]: I1101 00:31:08.601073 3123 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:31:08.602257 kubelet[3123]: E1101 00:31:08.602244 3123 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:31:08.602291 kubelet[3123]: I1101 00:31:08.602271 3123 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:31:08.609077 kubelet[3123]: I1101 00:31:08.609042 3123 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:31:08.609191 kubelet[3123]: I1101 00:31:08.609150 3123 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:31:08.609285 kubelet[3123]: I1101 00:31:08.609164 3123 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-d37906c143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:31:08.609285 kubelet[3123]: I1101 00:31:08.609261 3123 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 
00:31:08.609285 kubelet[3123]: I1101 00:31:08.609266 3123 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:31:08.609285 kubelet[3123]: I1101 00:31:08.609281 3123 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:31:08.609648 kubelet[3123]: I1101 00:31:08.609609 3123 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:31:08.609742 kubelet[3123]: I1101 00:31:08.609706 3123 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:31:08.609742 kubelet[3123]: I1101 00:31:08.609713 3123 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:31:08.609742 kubelet[3123]: I1101 00:31:08.609725 3123 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:31:08.609742 kubelet[3123]: I1101 00:31:08.609736 3123 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:31:08.610171 kubelet[3123]: I1101 00:31:08.610159 3123 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:31:08.610455 kubelet[3123]: I1101 00:31:08.610448 3123 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:31:08.610477 kubelet[3123]: I1101 00:31:08.610466 3123 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:31:08.611954 kubelet[3123]: I1101 00:31:08.611943 3123 server.go:1262] "Started kubelet" Nov 1 00:31:08.612017 kubelet[3123]: I1101 00:31:08.611987 3123 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:31:08.612051 kubelet[3123]: I1101 00:31:08.612004 3123 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:31:08.612089 kubelet[3123]: I1101 00:31:08.612055 3123 server_v1.go:49] 
"podresources" method="list" useActivePods=true Nov 1 00:31:08.612229 kubelet[3123]: I1101 00:31:08.612217 3123 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:31:08.612418 kubelet[3123]: I1101 00:31:08.612406 3123 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:31:08.612462 kubelet[3123]: I1101 00:31:08.612421 3123 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:31:08.612679 kubelet[3123]: E1101 00:31:08.612535 3123 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-d37906c143\" not found" Nov 1 00:31:08.612679 kubelet[3123]: I1101 00:31:08.612537 3123 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:31:08.612679 kubelet[3123]: I1101 00:31:08.612547 3123 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:31:08.612679 kubelet[3123]: I1101 00:31:08.612664 3123 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:31:08.612921 kubelet[3123]: I1101 00:31:08.612910 3123 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:31:08.612963 kubelet[3123]: I1101 00:31:08.612955 3123 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:31:08.612995 kubelet[3123]: I1101 00:31:08.612974 3123 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:31:08.613417 kubelet[3123]: E1101 00:31:08.613404 3123 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:31:08.613469 kubelet[3123]: I1101 00:31:08.613454 3123 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:31:08.619447 kubelet[3123]: I1101 00:31:08.619425 3123 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:31:08.619933 kubelet[3123]: I1101 00:31:08.619924 3123 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 00:31:08.619933 kubelet[3123]: I1101 00:31:08.619933 3123 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:31:08.620006 kubelet[3123]: I1101 00:31:08.619947 3123 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:31:08.620006 kubelet[3123]: E1101 00:31:08.619977 3123 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:31:08.627996 kubelet[3123]: I1101 00:31:08.627982 3123 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:31:08.627996 kubelet[3123]: I1101 00:31:08.627992 3123 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:31:08.628085 kubelet[3123]: I1101 00:31:08.628003 3123 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:31:08.628085 kubelet[3123]: I1101 00:31:08.628080 3123 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:31:08.628135 kubelet[3123]: I1101 00:31:08.628086 3123 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:31:08.628135 kubelet[3123]: I1101 00:31:08.628106 3123 policy_none.go:49] "None policy: Start" Nov 1 00:31:08.628135 kubelet[3123]: I1101 00:31:08.628113 3123 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:31:08.628135 kubelet[3123]: I1101 00:31:08.628121 3123 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 
00:31:08.628251 kubelet[3123]: I1101 00:31:08.628182 3123 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 00:31:08.628251 kubelet[3123]: I1101 00:31:08.628188 3123 policy_none.go:47] "Start" Nov 1 00:31:08.630014 kubelet[3123]: E1101 00:31:08.629977 3123 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:31:08.630081 kubelet[3123]: I1101 00:31:08.630075 3123 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:31:08.630125 kubelet[3123]: I1101 00:31:08.630082 3123 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:31:08.630169 kubelet[3123]: I1101 00:31:08.630161 3123 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:31:08.630467 kubelet[3123]: E1101 00:31:08.630432 3123 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:31:08.722216 kubelet[3123]: I1101 00:31:08.722079 3123 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.722441 kubelet[3123]: I1101 00:31:08.722298 3123 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.722538 kubelet[3123]: I1101 00:31:08.722494 3123 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.730307 kubelet[3123]: I1101 00:31:08.730223 3123 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:31:08.730591 kubelet[3123]: E1101 00:31:08.730382 3123 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d37906c143\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.730591 kubelet[3123]: I1101 00:31:08.730441 3123 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:31:08.732073 kubelet[3123]: I1101 00:31:08.731997 3123 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:31:08.732356 kubelet[3123]: E1101 00:31:08.732247 3123 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-d37906c143\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.737166 kubelet[3123]: I1101 00:31:08.737082 3123 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.761206 kubelet[3123]: I1101 00:31:08.761077 3123 
kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.761413 kubelet[3123]: I1101 00:31:08.761294 3123 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.914518 kubelet[3123]: I1101 00:31:08.914273 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccbbf6dc93ccff2683f4b6e5e782998a-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d37906c143\" (UID: \"ccbbf6dc93ccff2683f4b6e5e782998a\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.914518 kubelet[3123]: I1101 00:31:08.914382 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/150352b5b72dde4efcc6c1d86449ba28-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" (UID: \"150352b5b72dde4efcc6c1d86449ba28\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.914518 kubelet[3123]: I1101 00:31:08.914443 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/150352b5b72dde4efcc6c1d86449ba28-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" (UID: \"150352b5b72dde4efcc6c1d86449ba28\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.914518 kubelet[3123]: I1101 00:31:08.914512 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/150352b5b72dde4efcc6c1d86449ba28-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" (UID: \"150352b5b72dde4efcc6c1d86449ba28\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.914991 
kubelet[3123]: I1101 00:31:08.914564 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccbbf6dc93ccff2683f4b6e5e782998a-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d37906c143\" (UID: \"ccbbf6dc93ccff2683f4b6e5e782998a\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.914991 kubelet[3123]: I1101 00:31:08.914614 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccbbf6dc93ccff2683f4b6e5e782998a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-d37906c143\" (UID: \"ccbbf6dc93ccff2683f4b6e5e782998a\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.914991 kubelet[3123]: I1101 00:31:08.914699 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/150352b5b72dde4efcc6c1d86449ba28-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" (UID: \"150352b5b72dde4efcc6c1d86449ba28\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.914991 kubelet[3123]: I1101 00:31:08.914750 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/150352b5b72dde4efcc6c1d86449ba28-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" (UID: \"150352b5b72dde4efcc6c1d86449ba28\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:08.914991 kubelet[3123]: I1101 00:31:08.914859 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/897f2ec3a9e6fa0cc50eafc22e53b763-kubeconfig\") pod 
\"kube-scheduler-ci-4081.3.6-n-d37906c143\" (UID: \"897f2ec3a9e6fa0cc50eafc22e53b763\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:09.610539 kubelet[3123]: I1101 00:31:09.610476 3123 apiserver.go:52] "Watching apiserver" Nov 1 00:31:09.626171 kubelet[3123]: I1101 00:31:09.626088 3123 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:09.626566 kubelet[3123]: I1101 00:31:09.626514 3123 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:09.626776 kubelet[3123]: I1101 00:31:09.626580 3123 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:09.633243 kubelet[3123]: I1101 00:31:09.633167 3123 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:31:09.633243 kubelet[3123]: E1101 00:31:09.633192 3123 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d37906c143\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" Nov 1 00:31:09.633334 kubelet[3123]: I1101 00:31:09.633247 3123 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:31:09.633334 kubelet[3123]: E1101 00:31:09.633283 3123 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-d37906c143\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" Nov 1 00:31:09.633334 kubelet[3123]: I1101 00:31:09.633285 3123 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" 
Nov 1 00:31:09.633334 kubelet[3123]: E1101 00:31:09.633310 3123 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-d37906c143\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" Nov 1 00:31:09.645255 kubelet[3123]: I1101 00:31:09.645218 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d37906c143" podStartSLOduration=3.645190847 podStartE2EDuration="3.645190847s" podCreationTimestamp="2025-11-01 00:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:31:09.644973471 +0000 UTC m=+1.067205955" watchObservedRunningTime="2025-11-01 00:31:09.645190847 +0000 UTC m=+1.067423331" Nov 1 00:31:09.650059 kubelet[3123]: I1101 00:31:09.650028 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d37906c143" podStartSLOduration=1.650018089 podStartE2EDuration="1.650018089s" podCreationTimestamp="2025-11-01 00:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:31:09.649969308 +0000 UTC m=+1.072201796" watchObservedRunningTime="2025-11-01 00:31:09.650018089 +0000 UTC m=+1.072250569" Nov 1 00:31:09.653370 kubelet[3123]: I1101 00:31:09.653347 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d37906c143" podStartSLOduration=3.653338375 podStartE2EDuration="3.653338375s" podCreationTimestamp="2025-11-01 00:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:31:09.653285778 +0000 UTC m=+1.075518262" watchObservedRunningTime="2025-11-01 00:31:09.653338375 +0000 UTC m=+1.075570856" Nov 1 00:31:09.712948 
kubelet[3123]: I1101 00:31:09.712929 3123 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:31:11.935368 update_engine[1823]: I20251101 00:31:11.935235 1823 update_attempter.cc:509] Updating boot flags... Nov 1 00:31:11.977122 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (3217) Nov 1 00:31:12.004131 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (3213) Nov 1 00:31:13.186351 kubelet[3123]: I1101 00:31:13.186271 3123 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:31:13.187563 kubelet[3123]: I1101 00:31:13.187475 3123 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:31:13.187761 containerd[1835]: time="2025-11-01T00:31:13.186984028Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:31:14.324081 systemd[1]: Created slice kubepods-besteffort-podc08078fe_0c91_4b8d_9471_54af988720cc.slice - libcontainer container kubepods-besteffort-podc08078fe_0c91_4b8d_9471_54af988720cc.slice. 
Nov 1 00:31:14.354745 kubelet[3123]: I1101 00:31:14.354723 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c08078fe-0c91-4b8d-9471-54af988720cc-kube-proxy\") pod \"kube-proxy-rvjmc\" (UID: \"c08078fe-0c91-4b8d-9471-54af988720cc\") " pod="kube-system/kube-proxy-rvjmc" Nov 1 00:31:14.355059 kubelet[3123]: I1101 00:31:14.354750 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c08078fe-0c91-4b8d-9471-54af988720cc-xtables-lock\") pod \"kube-proxy-rvjmc\" (UID: \"c08078fe-0c91-4b8d-9471-54af988720cc\") " pod="kube-system/kube-proxy-rvjmc" Nov 1 00:31:14.355059 kubelet[3123]: I1101 00:31:14.354767 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c08078fe-0c91-4b8d-9471-54af988720cc-lib-modules\") pod \"kube-proxy-rvjmc\" (UID: \"c08078fe-0c91-4b8d-9471-54af988720cc\") " pod="kube-system/kube-proxy-rvjmc" Nov 1 00:31:14.355059 kubelet[3123]: I1101 00:31:14.354783 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6k29\" (UniqueName: \"kubernetes.io/projected/c08078fe-0c91-4b8d-9471-54af988720cc-kube-api-access-h6k29\") pod \"kube-proxy-rvjmc\" (UID: \"c08078fe-0c91-4b8d-9471-54af988720cc\") " pod="kube-system/kube-proxy-rvjmc" Nov 1 00:31:14.409127 systemd[1]: Created slice kubepods-besteffort-podf5022b4c_0ba1_4c4f_879e_852edb239a07.slice - libcontainer container kubepods-besteffort-podf5022b4c_0ba1_4c4f_879e_852edb239a07.slice. 
Nov 1 00:31:14.455486 kubelet[3123]: I1101 00:31:14.455415 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chmk7\" (UniqueName: \"kubernetes.io/projected/f5022b4c-0ba1-4c4f-879e-852edb239a07-kube-api-access-chmk7\") pod \"tigera-operator-65cdcdfd6d-7dqhk\" (UID: \"f5022b4c-0ba1-4c4f-879e-852edb239a07\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-7dqhk" Nov 1 00:31:14.455677 kubelet[3123]: I1101 00:31:14.455602 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5022b4c-0ba1-4c4f-879e-852edb239a07-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-7dqhk\" (UID: \"f5022b4c-0ba1-4c4f-879e-852edb239a07\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-7dqhk" Nov 1 00:31:14.721177 containerd[1835]: time="2025-11-01T00:31:14.720918868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rvjmc,Uid:c08078fe-0c91-4b8d-9471-54af988720cc,Namespace:kube-system,Attempt:0,}" Nov 1 00:31:14.745886 containerd[1835]: time="2025-11-01T00:31:14.745811453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-7dqhk,Uid:f5022b4c-0ba1-4c4f-879e-852edb239a07,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:31:15.082829 containerd[1835]: time="2025-11-01T00:31:15.082759570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:15.082829 containerd[1835]: time="2025-11-01T00:31:15.082785700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:15.082829 containerd[1835]: time="2025-11-01T00:31:15.082792815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:15.082940 containerd[1835]: time="2025-11-01T00:31:15.082832771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:15.102368 systemd[1]: Started cri-containerd-b3c89179252b40cea45a11a4d7dde11f3a7e968c36bcb4526185a71b2cf07649.scope - libcontainer container b3c89179252b40cea45a11a4d7dde11f3a7e968c36bcb4526185a71b2cf07649. Nov 1 00:31:15.116902 containerd[1835]: time="2025-11-01T00:31:15.116851375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rvjmc,Uid:c08078fe-0c91-4b8d-9471-54af988720cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3c89179252b40cea45a11a4d7dde11f3a7e968c36bcb4526185a71b2cf07649\"" Nov 1 00:31:15.170845 containerd[1835]: time="2025-11-01T00:31:15.170776195Z" level=info msg="CreateContainer within sandbox \"b3c89179252b40cea45a11a4d7dde11f3a7e968c36bcb4526185a71b2cf07649\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:31:15.221297 containerd[1835]: time="2025-11-01T00:31:15.221188209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:15.221297 containerd[1835]: time="2025-11-01T00:31:15.221246582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:15.221297 containerd[1835]: time="2025-11-01T00:31:15.221261994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:15.221489 containerd[1835]: time="2025-11-01T00:31:15.221345581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:15.248598 systemd[1]: Started cri-containerd-ab670262a95e2abc2654f85796498c89892c30c838fa45bef32cb3bb6bd85617.scope - libcontainer container ab670262a95e2abc2654f85796498c89892c30c838fa45bef32cb3bb6bd85617. Nov 1 00:31:15.317600 containerd[1835]: time="2025-11-01T00:31:15.317547820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-7dqhk,Uid:f5022b4c-0ba1-4c4f-879e-852edb239a07,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ab670262a95e2abc2654f85796498c89892c30c838fa45bef32cb3bb6bd85617\"" Nov 1 00:31:15.318556 containerd[1835]: time="2025-11-01T00:31:15.318536415Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:31:15.562006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171860431.mount: Deactivated successfully. Nov 1 00:31:15.624744 containerd[1835]: time="2025-11-01T00:31:15.624693518Z" level=info msg="CreateContainer within sandbox \"b3c89179252b40cea45a11a4d7dde11f3a7e968c36bcb4526185a71b2cf07649\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c54eacd1808955c106756ca7d4c9e250543ab7aef47d3b86ffa89d441d14e5b4\"" Nov 1 00:31:15.625110 containerd[1835]: time="2025-11-01T00:31:15.625095498Z" level=info msg="StartContainer for \"c54eacd1808955c106756ca7d4c9e250543ab7aef47d3b86ffa89d441d14e5b4\"" Nov 1 00:31:15.643271 systemd[1]: Started cri-containerd-c54eacd1808955c106756ca7d4c9e250543ab7aef47d3b86ffa89d441d14e5b4.scope - libcontainer container c54eacd1808955c106756ca7d4c9e250543ab7aef47d3b86ffa89d441d14e5b4. 
Nov 1 00:31:15.657758 containerd[1835]: time="2025-11-01T00:31:15.657702585Z" level=info msg="StartContainer for \"c54eacd1808955c106756ca7d4c9e250543ab7aef47d3b86ffa89d441d14e5b4\" returns successfully"
Nov 1 00:31:16.658950 kubelet[3123]: I1101 00:31:16.658904 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rvjmc" podStartSLOduration=2.65887931 podStartE2EDuration="2.65887931s" podCreationTimestamp="2025-11-01 00:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:31:16.658650179 +0000 UTC m=+8.080882664" watchObservedRunningTime="2025-11-01 00:31:16.65887931 +0000 UTC m=+8.081111790"
Nov 1 00:31:17.105287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854627176.mount: Deactivated successfully.
Nov 1 00:31:17.348558 containerd[1835]: time="2025-11-01T00:31:17.348504332Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:31:17.348790 containerd[1835]: time="2025-11-01T00:31:17.348729009Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 1 00:31:17.349138 containerd[1835]: time="2025-11-01T00:31:17.349108875Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:31:17.350125 containerd[1835]: time="2025-11-01T00:31:17.350078571Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:31:17.350576 containerd[1835]: time="2025-11-01T00:31:17.350533627Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.031974796s"
Nov 1 00:31:17.350576 containerd[1835]: time="2025-11-01T00:31:17.350551815Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 1 00:31:17.352151 containerd[1835]: time="2025-11-01T00:31:17.352136273Z" level=info msg="CreateContainer within sandbox \"ab670262a95e2abc2654f85796498c89892c30c838fa45bef32cb3bb6bd85617\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 1 00:31:17.356213 containerd[1835]: time="2025-11-01T00:31:17.356171323Z" level=info msg="CreateContainer within sandbox \"ab670262a95e2abc2654f85796498c89892c30c838fa45bef32cb3bb6bd85617\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"63aa8be235f17709622be2102aab09990d1ec7749fb136d4320fcc2d67823289\""
Nov 1 00:31:17.356435 containerd[1835]: time="2025-11-01T00:31:17.356421564Z" level=info msg="StartContainer for \"63aa8be235f17709622be2102aab09990d1ec7749fb136d4320fcc2d67823289\""
Nov 1 00:31:17.357045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2600889550.mount: Deactivated successfully.
Nov 1 00:31:17.380256 systemd[1]: Started cri-containerd-63aa8be235f17709622be2102aab09990d1ec7749fb136d4320fcc2d67823289.scope - libcontainer container 63aa8be235f17709622be2102aab09990d1ec7749fb136d4320fcc2d67823289.
Nov 1 00:31:17.392541 containerd[1835]: time="2025-11-01T00:31:17.392518310Z" level=info msg="StartContainer for \"63aa8be235f17709622be2102aab09990d1ec7749fb136d4320fcc2d67823289\" returns successfully"
Nov 1 00:31:17.654780 kubelet[3123]: I1101 00:31:17.654665 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-7dqhk" podStartSLOduration=1.6219635079999999 podStartE2EDuration="3.654652011s" podCreationTimestamp="2025-11-01 00:31:14 +0000 UTC" firstStartedPulling="2025-11-01 00:31:15.318311883 +0000 UTC m=+6.740544367" lastFinishedPulling="2025-11-01 00:31:17.351000389 +0000 UTC m=+8.773232870" observedRunningTime="2025-11-01 00:31:17.65457324 +0000 UTC m=+9.076805737" watchObservedRunningTime="2025-11-01 00:31:17.654652011 +0000 UTC m=+9.076884499"
Nov 1 00:31:21.612776 sudo[2149]: pam_unix(sudo:session): session closed for user root
Nov 1 00:31:21.613723 sshd[2146]: pam_unix(sshd:session): session closed for user core
Nov 1 00:31:21.615518 systemd[1]: sshd@10-139.178.94.145:22-139.178.89.65:49396.service: Deactivated successfully.
Nov 1 00:31:21.616469 systemd[1]: session-13.scope: Deactivated successfully.
Nov 1 00:31:21.616576 systemd[1]: session-13.scope: Consumed 3.834s CPU time, 168.2M memory peak, 0B memory swap peak.
Nov 1 00:31:21.617301 systemd-logind[1818]: Session 13 logged out. Waiting for processes to exit.
Nov 1 00:31:21.617909 systemd-logind[1818]: Removed session 13.
Nov 1 00:31:25.787257 systemd[1]: Created slice kubepods-besteffort-podfb28cd1b_a54b_44c3_81e7_24524e4bf341.slice - libcontainer container kubepods-besteffort-podfb28cd1b_a54b_44c3_81e7_24524e4bf341.slice.
Nov 1 00:31:25.836010 kubelet[3123]: I1101 00:31:25.835981 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb28cd1b-a54b-44c3-81e7-24524e4bf341-tigera-ca-bundle\") pod \"calico-typha-746dd786cf-4ls5z\" (UID: \"fb28cd1b-a54b-44c3-81e7-24524e4bf341\") " pod="calico-system/calico-typha-746dd786cf-4ls5z"
Nov 1 00:31:25.836394 kubelet[3123]: I1101 00:31:25.836023 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fb28cd1b-a54b-44c3-81e7-24524e4bf341-typha-certs\") pod \"calico-typha-746dd786cf-4ls5z\" (UID: \"fb28cd1b-a54b-44c3-81e7-24524e4bf341\") " pod="calico-system/calico-typha-746dd786cf-4ls5z"
Nov 1 00:31:25.836394 kubelet[3123]: I1101 00:31:25.836045 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjs8j\" (UniqueName: \"kubernetes.io/projected/fb28cd1b-a54b-44c3-81e7-24524e4bf341-kube-api-access-wjs8j\") pod \"calico-typha-746dd786cf-4ls5z\" (UID: \"fb28cd1b-a54b-44c3-81e7-24524e4bf341\") " pod="calico-system/calico-typha-746dd786cf-4ls5z"
Nov 1 00:31:25.980144 systemd[1]: Created slice kubepods-besteffort-pod62ba5a57_ae62_41b6_8b88_a93d4d7491ef.slice - libcontainer container kubepods-besteffort-pod62ba5a57_ae62_41b6_8b88_a93d4d7491ef.slice.
Nov 1 00:31:26.038219 kubelet[3123]: I1101 00:31:26.037936 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-cni-net-dir\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.038219 kubelet[3123]: I1101 00:31:26.038023 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-tigera-ca-bundle\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.038219 kubelet[3123]: I1101 00:31:26.038087 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzmtn\" (UniqueName: \"kubernetes.io/projected/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-kube-api-access-hzmtn\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.038673 kubelet[3123]: I1101 00:31:26.038235 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-xtables-lock\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.038673 kubelet[3123]: I1101 00:31:26.038330 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-node-certs\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.038673 kubelet[3123]: I1101 00:31:26.038380 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-var-run-calico\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.038673 kubelet[3123]: I1101 00:31:26.038433 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-cni-log-dir\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.038673 kubelet[3123]: I1101 00:31:26.038531 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-lib-modules\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.039218 kubelet[3123]: I1101 00:31:26.038588 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-cni-bin-dir\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.039218 kubelet[3123]: I1101 00:31:26.038634 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-flexvol-driver-host\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.039218 kubelet[3123]: I1101 00:31:26.038691 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-policysync\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.039218 kubelet[3123]: I1101 00:31:26.038737 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/62ba5a57-ae62-41b6-8b88-a93d4d7491ef-var-lib-calico\") pod \"calico-node-kl2tg\" (UID: \"62ba5a57-ae62-41b6-8b88-a93d4d7491ef\") " pod="calico-system/calico-node-kl2tg"
Nov 1 00:31:26.091959 containerd[1835]: time="2025-11-01T00:31:26.091899675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-746dd786cf-4ls5z,Uid:fb28cd1b-a54b-44c3-81e7-24524e4bf341,Namespace:calico-system,Attempt:0,}"
Nov 1 00:31:26.101815 containerd[1835]: time="2025-11-01T00:31:26.101546817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:31:26.101815 containerd[1835]: time="2025-11-01T00:31:26.101805669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:31:26.101929 containerd[1835]: time="2025-11-01T00:31:26.101817681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:31:26.101929 containerd[1835]: time="2025-11-01T00:31:26.101875737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:31:26.116276 systemd[1]: Started cri-containerd-bdeda4c356d4feb9968621a733643812510c015ffc50524c5cd0f17a9db8ac1b.scope - libcontainer container bdeda4c356d4feb9968621a733643812510c015ffc50524c5cd0f17a9db8ac1b.
Nov 1 00:31:26.141413 kubelet[3123]: E1101 00:31:26.141369 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:31:26.141413 kubelet[3123]: W1101 00:31:26.141387 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:31:26.141413 kubelet[3123]: E1101 00:31:26.141413 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:31:26.142740 kubelet[3123]: E1101 00:31:26.142726 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:31:26.142740 kubelet[3123]: W1101 00:31:26.142739 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:31:26.142813 kubelet[3123]: E1101 00:31:26.142753 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:31:26.148265 kubelet[3123]: E1101 00:31:26.148222 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4"
Nov 1 00:31:26.150021 kubelet[3123]: E1101 00:31:26.150002 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:31:26.150021 kubelet[3123]: W1101 00:31:26.150016 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:31:26.150372 kubelet[3123]: E1101 00:31:26.150336 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:31:26.150671 containerd[1835]: time="2025-11-01T00:31:26.150640559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-746dd786cf-4ls5z,Uid:fb28cd1b-a54b-44c3-81e7-24524e4bf341,Namespace:calico-system,Attempt:0,} returns sandbox id \"bdeda4c356d4feb9968621a733643812510c015ffc50524c5cd0f17a9db8ac1b\""
Nov 1 00:31:26.152155 containerd[1835]: time="2025-11-01T00:31:26.152133694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 1 00:31:26.217043 kubelet[3123]: E1101 00:31:26.216989 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:31:26.217043 kubelet[3123]: W1101 00:31:26.217009 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:31:26.217043 kubelet[3123]: E1101 00:31:26.217026 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:31:26.217270 kubelet[3123]: E1101 00:31:26.217210 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:31:26.217270 kubelet[3123]: W1101 00:31:26.217219 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:31:26.217270 kubelet[3123]: E1101 00:31:26.217227 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 1 00:31:26.219483 kubelet[3123]: E1101 00:31:26.219477 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:31:26.219508 kubelet[3123]: W1101 00:31:26.219483 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:31:26.219508 kubelet[3123]: E1101 00:31:26.219489 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Nov 1 00:31:26.241303 kubelet[3123]: I1101 00:31:26.241106 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0245246d-bdc5-450d-b21c-5eff759295d4-varrun\") pod \"csi-node-driver-vvbjm\" (UID: \"0245246d-bdc5-450d-b21c-5eff759295d4\") " pod="calico-system/csi-node-driver-vvbjm"
Nov 1 00:31:26.241440 kubelet[3123]: I1101 00:31:26.241367 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0245246d-bdc5-450d-b21c-5eff759295d4-registration-dir\") pod \"csi-node-driver-vvbjm\" (UID: \"0245246d-bdc5-450d-b21c-5eff759295d4\") " pod="calico-system/csi-node-driver-vvbjm"
Nov 1 00:31:26.241606 kubelet[3123]: I1101 00:31:26.241602 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwvq\" (UniqueName: \"kubernetes.io/projected/0245246d-bdc5-450d-b21c-5eff759295d4-kube-api-access-mwwvq\") pod \"csi-node-driver-vvbjm\" (UID: \"0245246d-bdc5-450d-b21c-5eff759295d4\") " pod="calico-system/csi-node-driver-vvbjm"
Nov 1 00:31:26.242899 kubelet[3123]: I1101 00:31:26.242764 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0245246d-bdc5-450d-b21c-5eff759295d4-kubelet-dir\") pod \"csi-node-driver-vvbjm\" (UID: \"0245246d-bdc5-450d-b21c-5eff759295d4\") " pod="calico-system/csi-node-driver-vvbjm"
Nov 1 00:31:26.243445 kubelet[3123]: E1101 00:31:26.243433 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:31:26.243493 kubelet[3123]: W1101 00:31:26.243446 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:31:26.243493 kubelet[3123]: E1101 00:31:26.243458 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 1 00:31:26.243493 kubelet[3123]: I1101 00:31:26.243481 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0245246d-bdc5-450d-b21c-5eff759295d4-socket-dir\") pod \"csi-node-driver-vvbjm\" (UID: \"0245246d-bdc5-450d-b21c-5eff759295d4\") " pod="calico-system/csi-node-driver-vvbjm" Nov 1 00:31:26.243714 kubelet[3123]: E1101 00:31:26.243699 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.243762 kubelet[3123]: W1101 00:31:26.243715 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.243762 kubelet[3123]: E1101 00:31:26.243729 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.243963 kubelet[3123]: E1101 00:31:26.243951 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.244006 kubelet[3123]: W1101 00:31:26.243963 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.244006 kubelet[3123]: E1101 00:31:26.243977 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.244222 kubelet[3123]: E1101 00:31:26.244208 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.244222 kubelet[3123]: W1101 00:31:26.244221 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.244325 kubelet[3123]: E1101 00:31:26.244233 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.244421 kubelet[3123]: E1101 00:31:26.244409 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.244465 kubelet[3123]: W1101 00:31:26.244421 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.244465 kubelet[3123]: E1101 00:31:26.244433 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.284506 containerd[1835]: time="2025-11-01T00:31:26.284440213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kl2tg,Uid:62ba5a57-ae62-41b6-8b88-a93d4d7491ef,Namespace:calico-system,Attempt:0,}" Nov 1 00:31:26.294364 containerd[1835]: time="2025-11-01T00:31:26.294229360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:26.294364 containerd[1835]: time="2025-11-01T00:31:26.294258620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:26.294364 containerd[1835]: time="2025-11-01T00:31:26.294265590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:26.294364 containerd[1835]: time="2025-11-01T00:31:26.294303381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:26.308377 systemd[1]: Started cri-containerd-ec43a968a0f9ab5065af117d1a0bf1675849183d81180c5f3452adfbe2534a2c.scope - libcontainer container ec43a968a0f9ab5065af117d1a0bf1675849183d81180c5f3452adfbe2534a2c. Nov 1 00:31:26.319715 containerd[1835]: time="2025-11-01T00:31:26.319661994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kl2tg,Uid:62ba5a57-ae62-41b6-8b88-a93d4d7491ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec43a968a0f9ab5065af117d1a0bf1675849183d81180c5f3452adfbe2534a2c\"" Nov 1 00:31:26.344564 kubelet[3123]: E1101 00:31:26.344512 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.344564 kubelet[3123]: W1101 00:31:26.344531 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.344564 kubelet[3123]: E1101 00:31:26.344549 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.344836 kubelet[3123]: E1101 00:31:26.344794 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.344836 kubelet[3123]: W1101 00:31:26.344807 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.344836 kubelet[3123]: E1101 00:31:26.344822 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.345077 kubelet[3123]: E1101 00:31:26.345064 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.345077 kubelet[3123]: W1101 00:31:26.345074 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.345202 kubelet[3123]: E1101 00:31:26.345085 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.345390 kubelet[3123]: E1101 00:31:26.345348 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.345390 kubelet[3123]: W1101 00:31:26.345361 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.345390 kubelet[3123]: E1101 00:31:26.345374 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.345640 kubelet[3123]: E1101 00:31:26.345626 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.345684 kubelet[3123]: W1101 00:31:26.345640 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.345684 kubelet[3123]: E1101 00:31:26.345653 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.345910 kubelet[3123]: E1101 00:31:26.345894 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.345963 kubelet[3123]: W1101 00:31:26.345910 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.345963 kubelet[3123]: E1101 00:31:26.345924 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.346114 kubelet[3123]: E1101 00:31:26.346103 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.346114 kubelet[3123]: W1101 00:31:26.346113 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.346204 kubelet[3123]: E1101 00:31:26.346124 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.346312 kubelet[3123]: E1101 00:31:26.346300 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.346353 kubelet[3123]: W1101 00:31:26.346311 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.346353 kubelet[3123]: E1101 00:31:26.346322 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.346519 kubelet[3123]: E1101 00:31:26.346509 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.346555 kubelet[3123]: W1101 00:31:26.346519 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.346555 kubelet[3123]: E1101 00:31:26.346530 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.346693 kubelet[3123]: E1101 00:31:26.346683 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.346733 kubelet[3123]: W1101 00:31:26.346692 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.346733 kubelet[3123]: E1101 00:31:26.346702 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.346865 kubelet[3123]: E1101 00:31:26.346856 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.346905 kubelet[3123]: W1101 00:31:26.346865 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.346905 kubelet[3123]: E1101 00:31:26.346874 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.347025 kubelet[3123]: E1101 00:31:26.347015 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.347025 kubelet[3123]: W1101 00:31:26.347024 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.347103 kubelet[3123]: E1101 00:31:26.347033 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.347342 kubelet[3123]: E1101 00:31:26.347327 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.347401 kubelet[3123]: W1101 00:31:26.347341 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.347401 kubelet[3123]: E1101 00:31:26.347356 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.347581 kubelet[3123]: E1101 00:31:26.347568 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.347628 kubelet[3123]: W1101 00:31:26.347581 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.347628 kubelet[3123]: E1101 00:31:26.347594 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.347855 kubelet[3123]: E1101 00:31:26.347841 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.347855 kubelet[3123]: W1101 00:31:26.347853 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.347972 kubelet[3123]: E1101 00:31:26.347866 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.348111 kubelet[3123]: E1101 00:31:26.348088 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.348111 kubelet[3123]: W1101 00:31:26.348110 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.348214 kubelet[3123]: E1101 00:31:26.348122 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.348307 kubelet[3123]: E1101 00:31:26.348295 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.348354 kubelet[3123]: W1101 00:31:26.348306 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.348354 kubelet[3123]: E1101 00:31:26.348316 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.348543 kubelet[3123]: E1101 00:31:26.348532 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.348543 kubelet[3123]: W1101 00:31:26.348543 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.348621 kubelet[3123]: E1101 00:31:26.348553 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.348765 kubelet[3123]: E1101 00:31:26.348754 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.348815 kubelet[3123]: W1101 00:31:26.348764 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.348815 kubelet[3123]: E1101 00:31:26.348775 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.348948 kubelet[3123]: E1101 00:31:26.348937 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.348995 kubelet[3123]: W1101 00:31:26.348949 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.348995 kubelet[3123]: E1101 00:31:26.348959 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.349208 kubelet[3123]: E1101 00:31:26.349192 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.349257 kubelet[3123]: W1101 00:31:26.349209 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.349257 kubelet[3123]: E1101 00:31:26.349224 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.349462 kubelet[3123]: E1101 00:31:26.349449 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.349505 kubelet[3123]: W1101 00:31:26.349462 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.349505 kubelet[3123]: E1101 00:31:26.349475 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.349731 kubelet[3123]: E1101 00:31:26.349718 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.349731 kubelet[3123]: W1101 00:31:26.349729 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.349829 kubelet[3123]: E1101 00:31:26.349740 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.350001 kubelet[3123]: E1101 00:31:26.349989 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.350001 kubelet[3123]: W1101 00:31:26.350000 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.350083 kubelet[3123]: E1101 00:31:26.350011 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:26.350231 kubelet[3123]: E1101 00:31:26.350220 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.350284 kubelet[3123]: W1101 00:31:26.350231 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.350284 kubelet[3123]: E1101 00:31:26.350242 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:26.360810 kubelet[3123]: E1101 00:31:26.360760 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:26.360810 kubelet[3123]: W1101 00:31:26.360784 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:26.360810 kubelet[3123]: E1101 00:31:26.360807 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:27.452993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537382299.mount: Deactivated successfully. Nov 1 00:31:27.800025 containerd[1835]: time="2025-11-01T00:31:27.799974197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:31:27.800255 containerd[1835]: time="2025-11-01T00:31:27.800157946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:31:27.800577 containerd[1835]: time="2025-11-01T00:31:27.800538195Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:31:27.801466 containerd[1835]: time="2025-11-01T00:31:27.801426737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:31:27.801894 containerd[1835]: time="2025-11-01T00:31:27.801850934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.649692018s" Nov 1 00:31:27.801894 containerd[1835]: time="2025-11-01T00:31:27.801868683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:31:27.802420 containerd[1835]: time="2025-11-01T00:31:27.802379601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:31:27.806087 containerd[1835]: time="2025-11-01T00:31:27.806068158Z" level=info msg="CreateContainer within sandbox \"bdeda4c356d4feb9968621a733643812510c015ffc50524c5cd0f17a9db8ac1b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:31:27.810233 containerd[1835]: time="2025-11-01T00:31:27.810220648Z" level=info msg="CreateContainer within sandbox \"bdeda4c356d4feb9968621a733643812510c015ffc50524c5cd0f17a9db8ac1b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8f9a23bdda7e321fb9a853977b3e582713dc326e8d809ef1429f18c809077625\"" Nov 1 00:31:27.810465 containerd[1835]: time="2025-11-01T00:31:27.810453352Z" level=info msg="StartContainer for \"8f9a23bdda7e321fb9a853977b3e582713dc326e8d809ef1429f18c809077625\"" Nov 1 00:31:27.840390 systemd[1]: Started cri-containerd-8f9a23bdda7e321fb9a853977b3e582713dc326e8d809ef1429f18c809077625.scope - libcontainer container 8f9a23bdda7e321fb9a853977b3e582713dc326e8d809ef1429f18c809077625. 
Nov 1 00:31:27.875066 containerd[1835]: time="2025-11-01T00:31:27.875036069Z" level=info msg="StartContainer for \"8f9a23bdda7e321fb9a853977b3e582713dc326e8d809ef1429f18c809077625\" returns successfully" Nov 1 00:31:28.621667 kubelet[3123]: E1101 00:31:28.621584 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:31:28.702040 kubelet[3123]: I1101 00:31:28.701929 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-746dd786cf-4ls5z" podStartSLOduration=2.05141973 podStartE2EDuration="3.701891779s" podCreationTimestamp="2025-11-01 00:31:25 +0000 UTC" firstStartedPulling="2025-11-01 00:31:26.151864742 +0000 UTC m=+17.574097231" lastFinishedPulling="2025-11-01 00:31:27.8023368 +0000 UTC m=+19.224569280" observedRunningTime="2025-11-01 00:31:28.701350695 +0000 UTC m=+20.123583247" watchObservedRunningTime="2025-11-01 00:31:28.701891779 +0000 UTC m=+20.124124314" Nov 1 00:31:28.735680 kubelet[3123]: E1101 00:31:28.735634 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.735680 kubelet[3123]: W1101 00:31:28.735649 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.735680 kubelet[3123]: E1101 00:31:28.735662 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.735852 kubelet[3123]: E1101 00:31:28.735813 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.735852 kubelet[3123]: W1101 00:31:28.735820 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.735852 kubelet[3123]: E1101 00:31:28.735826 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.735972 kubelet[3123]: E1101 00:31:28.735966 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.735972 kubelet[3123]: W1101 00:31:28.735971 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.736014 kubelet[3123]: E1101 00:31:28.735977 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.736221 kubelet[3123]: E1101 00:31:28.736184 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.736221 kubelet[3123]: W1101 00:31:28.736191 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.736221 kubelet[3123]: E1101 00:31:28.736197 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.736384 kubelet[3123]: E1101 00:31:28.736349 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.736384 kubelet[3123]: W1101 00:31:28.736354 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.736384 kubelet[3123]: E1101 00:31:28.736359 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.736476 kubelet[3123]: E1101 00:31:28.736465 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.736476 kubelet[3123]: W1101 00:31:28.736469 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.736476 kubelet[3123]: E1101 00:31:28.736474 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.736587 kubelet[3123]: E1101 00:31:28.736553 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.736587 kubelet[3123]: W1101 00:31:28.736558 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.736587 kubelet[3123]: E1101 00:31:28.736562 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.736657 kubelet[3123]: E1101 00:31:28.736632 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.736657 kubelet[3123]: W1101 00:31:28.736636 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.736657 kubelet[3123]: E1101 00:31:28.736640 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.736759 kubelet[3123]: E1101 00:31:28.736754 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.736759 kubelet[3123]: W1101 00:31:28.736759 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.736791 kubelet[3123]: E1101 00:31:28.736763 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.736833 kubelet[3123]: E1101 00:31:28.736829 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.736851 kubelet[3123]: W1101 00:31:28.736833 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.736851 kubelet[3123]: E1101 00:31:28.736838 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.736907 kubelet[3123]: E1101 00:31:28.736903 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.736928 kubelet[3123]: W1101 00:31:28.736907 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.736928 kubelet[3123]: E1101 00:31:28.736911 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.736981 kubelet[3123]: E1101 00:31:28.736977 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.736999 kubelet[3123]: W1101 00:31:28.736981 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.736999 kubelet[3123]: E1101 00:31:28.736985 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.737059 kubelet[3123]: E1101 00:31:28.737055 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.737079 kubelet[3123]: W1101 00:31:28.737059 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.737079 kubelet[3123]: E1101 00:31:28.737064 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.737138 kubelet[3123]: E1101 00:31:28.737134 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.737138 kubelet[3123]: W1101 00:31:28.737138 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.737172 kubelet[3123]: E1101 00:31:28.737142 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.737216 kubelet[3123]: E1101 00:31:28.737211 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.737236 kubelet[3123]: W1101 00:31:28.737216 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.737236 kubelet[3123]: E1101 00:31:28.737220 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.762576 kubelet[3123]: E1101 00:31:28.762538 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.762576 kubelet[3123]: W1101 00:31:28.762547 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.762576 kubelet[3123]: E1101 00:31:28.762557 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.762766 kubelet[3123]: E1101 00:31:28.762716 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.762766 kubelet[3123]: W1101 00:31:28.762724 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.762766 kubelet[3123]: E1101 00:31:28.762732 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.762934 kubelet[3123]: E1101 00:31:28.762893 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.762934 kubelet[3123]: W1101 00:31:28.762901 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.762934 kubelet[3123]: E1101 00:31:28.762908 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.763059 kubelet[3123]: E1101 00:31:28.763051 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.763059 kubelet[3123]: W1101 00:31:28.763058 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.763123 kubelet[3123]: E1101 00:31:28.763065 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.763231 kubelet[3123]: E1101 00:31:28.763188 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.763231 kubelet[3123]: W1101 00:31:28.763195 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.763231 kubelet[3123]: E1101 00:31:28.763201 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.763386 kubelet[3123]: E1101 00:31:28.763333 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.763386 kubelet[3123]: W1101 00:31:28.763341 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.763386 kubelet[3123]: E1101 00:31:28.763349 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.763555 kubelet[3123]: E1101 00:31:28.763515 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.763555 kubelet[3123]: W1101 00:31:28.763521 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.763555 kubelet[3123]: E1101 00:31:28.763528 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.763824 kubelet[3123]: E1101 00:31:28.763810 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.763852 kubelet[3123]: W1101 00:31:28.763826 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.763852 kubelet[3123]: E1101 00:31:28.763837 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.763991 kubelet[3123]: E1101 00:31:28.763983 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.764017 kubelet[3123]: W1101 00:31:28.763991 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.764017 kubelet[3123]: E1101 00:31:28.763999 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.764128 kubelet[3123]: E1101 00:31:28.764119 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.764128 kubelet[3123]: W1101 00:31:28.764126 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.764191 kubelet[3123]: E1101 00:31:28.764133 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.764284 kubelet[3123]: E1101 00:31:28.764276 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.764284 kubelet[3123]: W1101 00:31:28.764283 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.764332 kubelet[3123]: E1101 00:31:28.764289 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.764456 kubelet[3123]: E1101 00:31:28.764449 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.764481 kubelet[3123]: W1101 00:31:28.764456 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.764481 kubelet[3123]: E1101 00:31:28.764463 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.764628 kubelet[3123]: E1101 00:31:28.764620 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.764660 kubelet[3123]: W1101 00:31:28.764628 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.764660 kubelet[3123]: E1101 00:31:28.764635 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.764850 kubelet[3123]: E1101 00:31:28.764839 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.764881 kubelet[3123]: W1101 00:31:28.764850 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.764881 kubelet[3123]: E1101 00:31:28.764859 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.764986 kubelet[3123]: E1101 00:31:28.764978 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.765015 kubelet[3123]: W1101 00:31:28.764986 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.765015 kubelet[3123]: E1101 00:31:28.764994 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.765173 kubelet[3123]: E1101 00:31:28.765164 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.765173 kubelet[3123]: W1101 00:31:28.765172 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.765241 kubelet[3123]: E1101 00:31:28.765179 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:31:28.765419 kubelet[3123]: E1101 00:31:28.765409 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.765448 kubelet[3123]: W1101 00:31:28.765419 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.765448 kubelet[3123]: E1101 00:31:28.765428 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:31:28.765618 kubelet[3123]: E1101 00:31:28.765589 3123 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:31:28.765618 kubelet[3123]: W1101 00:31:28.765597 3123 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:31:28.765618 kubelet[3123]: E1101 00:31:28.765604 3123 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Nov 1 00:31:29.098702 containerd[1835]: time="2025-11-01T00:31:29.098651724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Nov 1 00:31:29.098907 containerd[1835]: time="2025-11-01T00:31:29.098853599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" 
Nov 1 00:31:29.099168 containerd[1835]: time="2025-11-01T00:31:29.099125709Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Nov 1 00:31:29.100165 containerd[1835]: time="2025-11-01T00:31:29.100112973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Nov 1 00:31:29.100574 containerd[1835]: time="2025-11-01T00:31:29.100530990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.298135086s" 
Nov 1 00:31:29.100574 containerd[1835]: time="2025-11-01T00:31:29.100547091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" 
Nov 1 00:31:29.102021 containerd[1835]: time="2025-11-01T00:31:29.102010479Z" level=info msg="CreateContainer within sandbox \"ec43a968a0f9ab5065af117d1a0bf1675849183d81180c5f3452adfbe2534a2c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" 
Nov 1 00:31:29.106672 containerd[1835]: time="2025-11-01T00:31:29.106656670Z" level=info msg="CreateContainer within sandbox \"ec43a968a0f9ab5065af117d1a0bf1675849183d81180c5f3452adfbe2534a2c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41df01e0bcf6f872e110e8b675a5e943647ac0a5c46876f6fc2474494d030498\"" 
Nov 1 00:31:29.106853 containerd[1835]: time="2025-11-01T00:31:29.106842950Z" level=info msg="StartContainer for \"41df01e0bcf6f872e110e8b675a5e943647ac0a5c46876f6fc2474494d030498\"" 
Nov 1 00:31:29.140278 systemd[1]: Started cri-containerd-41df01e0bcf6f872e110e8b675a5e943647ac0a5c46876f6fc2474494d030498.scope - libcontainer container 41df01e0bcf6f872e110e8b675a5e943647ac0a5c46876f6fc2474494d030498. 
Nov 1 00:31:29.170055 containerd[1835]: time="2025-11-01T00:31:29.170022571Z" level=info msg="StartContainer for \"41df01e0bcf6f872e110e8b675a5e943647ac0a5c46876f6fc2474494d030498\" returns successfully" 
Nov 1 00:31:29.170349 systemd[1]: cri-containerd-41df01e0bcf6f872e110e8b675a5e943647ac0a5c46876f6fc2474494d030498.scope: Deactivated successfully. 
Nov 1 00:31:29.186266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41df01e0bcf6f872e110e8b675a5e943647ac0a5c46876f6fc2474494d030498-rootfs.mount: Deactivated successfully. 
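The repeated kubelet errors above ("Failed to unmarshal output for command: init ... unexpected end of JSON input") occur because the kubelet probes the FlexVolume plugin directory before the flexvol-driver container has installed the nodeagent~uds binary: the executable is missing, the driver call produces empty output, and parsing "" as JSON fails. For illustration only, a minimal sketch (hypothetical, not the actual nodeagent~uds driver) of the JSON status object a FlexVolume driver is expected to print for the init call:

```python
#!/usr/bin/env python3
# Hypothetical FlexVolume driver stub, sketching the JSON contract the
# kubelet expects from driver invocations such as "init". An empty stdout
# is exactly what produced "unexpected end of JSON input" in the log above.
import json
import sys

def handle(command: str) -> dict:
    """Return the status object a FlexVolume driver must print as JSON."""
    if command == "init":
        # "capabilities": {"attach": false} tells the kubelet this driver
        # does not implement attach/detach.
        return {"status": "Success", "capabilities": {"attach": False}}
    # Commands the driver does not implement report "Not supported".
    return {"status": "Not supported",
            "message": "unsupported command: %s" % command}

if __name__ == "__main__":
    cmd = sys.argv[1] if len(sys.argv) > 1 else ""
    # Always emit a JSON object, even for unsupported commands, so the
    # kubelet never has to unmarshal empty output.
    print(json.dumps(handle(cmd)))
```

Once the flexvol-driver init container below copies the real binary into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the probing errors stop on their own.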
Nov 1 00:31:29.641433 containerd[1835]: time="2025-11-01T00:31:29.641394951Z" level=info msg="shim disconnected" id=41df01e0bcf6f872e110e8b675a5e943647ac0a5c46876f6fc2474494d030498 namespace=k8s.io 
Nov 1 00:31:29.641433 containerd[1835]: time="2025-11-01T00:31:29.641430874Z" level=warning msg="cleaning up after shim disconnected" id=41df01e0bcf6f872e110e8b675a5e943647ac0a5c46876f6fc2474494d030498 namespace=k8s.io 
Nov 1 00:31:29.641433 containerd[1835]: time="2025-11-01T00:31:29.641436304Z" level=info msg="cleaning up dead shim" namespace=k8s.io 
Nov 1 00:31:29.689738 containerd[1835]: time="2025-11-01T00:31:29.689664837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" 
Nov 1 00:31:30.621695 kubelet[3123]: E1101 00:31:30.621574 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" 
Nov 1 00:31:31.922418 containerd[1835]: time="2025-11-01T00:31:31.922392868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Nov 1 00:31:31.922644 containerd[1835]: time="2025-11-01T00:31:31.922585307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" 
Nov 1 00:31:31.923076 containerd[1835]: time="2025-11-01T00:31:31.923061734Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Nov 1 00:31:31.924084 containerd[1835]: time="2025-11-01T00:31:31.924073425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Nov 1 00:31:31.924524 containerd[1835]: time="2025-11-01T00:31:31.924509775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.234779369s" 
Nov 1 00:31:31.924547 containerd[1835]: time="2025-11-01T00:31:31.924527817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" 
Nov 1 00:31:31.926067 containerd[1835]: time="2025-11-01T00:31:31.926054561Z" level=info msg="CreateContainer within sandbox \"ec43a968a0f9ab5065af117d1a0bf1675849183d81180c5f3452adfbe2534a2c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" 
Nov 1 00:31:31.930607 containerd[1835]: time="2025-11-01T00:31:31.930564346Z" level=info msg="CreateContainer within sandbox \"ec43a968a0f9ab5065af117d1a0bf1675849183d81180c5f3452adfbe2534a2c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"afe1369c13c5ed8d5e1a5a86169f2fbae6728392c7e7f7f9968be889cd6543f6\"" 
Nov 1 00:31:31.930787 containerd[1835]: time="2025-11-01T00:31:31.930775449Z" level=info msg="StartContainer for \"afe1369c13c5ed8d5e1a5a86169f2fbae6728392c7e7f7f9968be889cd6543f6\"" 
Nov 1 00:31:31.954658 systemd[1]: Started cri-containerd-afe1369c13c5ed8d5e1a5a86169f2fbae6728392c7e7f7f9968be889cd6543f6.scope - libcontainer container afe1369c13c5ed8d5e1a5a86169f2fbae6728392c7e7f7f9968be889cd6543f6. 
Nov 1 00:31:32.006106 containerd[1835]: time="2025-11-01T00:31:32.006063423Z" level=info msg="StartContainer for \"afe1369c13c5ed8d5e1a5a86169f2fbae6728392c7e7f7f9968be889cd6543f6\" returns successfully" 
Nov 1 00:31:32.594828 systemd[1]: cri-containerd-afe1369c13c5ed8d5e1a5a86169f2fbae6728392c7e7f7f9968be889cd6543f6.scope: Deactivated successfully. 
Nov 1 00:31:32.605688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afe1369c13c5ed8d5e1a5a86169f2fbae6728392c7e7f7f9968be889cd6543f6-rootfs.mount: Deactivated successfully. 
Nov 1 00:31:32.621117 kubelet[3123]: E1101 00:31:32.620963 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" 
Nov 1 00:31:32.658893 kubelet[3123]: I1101 00:31:32.658833 3123 kubelet_node_status.go:439] "Fast updating node status as it just became ready" 
Nov 1 00:31:32.749419 systemd[1]: Created slice kubepods-burstable-podbd354653_92ce_413f_9189_183709f503cd.slice - libcontainer container kubepods-burstable-podbd354653_92ce_413f_9189_183709f503cd.slice. 
Nov 1 00:31:32.793879 kubelet[3123]: I1101 00:31:32.793773 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffmpd\" (UniqueName: \"kubernetes.io/projected/bd354653-92ce-413f-9189-183709f503cd-kube-api-access-ffmpd\") pod \"coredns-66bc5c9577-6d7s5\" (UID: \"bd354653-92ce-413f-9189-183709f503cd\") " pod="kube-system/coredns-66bc5c9577-6d7s5" 
Nov 1 00:31:32.794168 kubelet[3123]: I1101 00:31:32.793923 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd354653-92ce-413f-9189-183709f503cd-config-volume\") pod \"coredns-66bc5c9577-6d7s5\" (UID: \"bd354653-92ce-413f-9189-183709f503cd\") " pod="kube-system/coredns-66bc5c9577-6d7s5" 
Nov 1 00:31:32.828187 systemd[1]: Created slice kubepods-burstable-pod9a95ee66_2c19_4f09_bcfd_9c4e55da76e6.slice - libcontainer container kubepods-burstable-pod9a95ee66_2c19_4f09_bcfd_9c4e55da76e6.slice. 
Nov 1 00:31:32.845589 systemd[1]: Created slice kubepods-besteffort-pod055e53bc_992b_4781_aa59_63b9452c2f8e.slice - libcontainer container kubepods-besteffort-pod055e53bc_992b_4781_aa59_63b9452c2f8e.slice. 
Nov 1 00:31:32.894998 kubelet[3123]: I1101 00:31:32.894887 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a95ee66-2c19-4f09-bcfd-9c4e55da76e6-config-volume\") pod \"coredns-66bc5c9577-vcqfb\" (UID: \"9a95ee66-2c19-4f09-bcfd-9c4e55da76e6\") " pod="kube-system/coredns-66bc5c9577-vcqfb" 
Nov 1 00:31:32.894998 kubelet[3123]: I1101 00:31:32.894975 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txzmb\" (UniqueName: \"kubernetes.io/projected/9a95ee66-2c19-4f09-bcfd-9c4e55da76e6-kube-api-access-txzmb\") pod \"coredns-66bc5c9577-vcqfb\" (UID: \"9a95ee66-2c19-4f09-bcfd-9c4e55da76e6\") " pod="kube-system/coredns-66bc5c9577-vcqfb" 
Nov 1 00:31:32.895495 kubelet[3123]: I1101 00:31:32.895039 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/055e53bc-992b-4781-aa59-63b9452c2f8e-tigera-ca-bundle\") pod \"calico-kube-controllers-75c65644df-6srvh\" (UID: \"055e53bc-992b-4781-aa59-63b9452c2f8e\") " pod="calico-system/calico-kube-controllers-75c65644df-6srvh" 
Nov 1 00:31:32.895495 kubelet[3123]: I1101 00:31:32.895232 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvrnr\" (UniqueName: \"kubernetes.io/projected/055e53bc-992b-4781-aa59-63b9452c2f8e-kube-api-access-nvrnr\") pod \"calico-kube-controllers-75c65644df-6srvh\" (UID: \"055e53bc-992b-4781-aa59-63b9452c2f8e\") " pod="calico-system/calico-kube-controllers-75c65644df-6srvh" 
Nov 1 00:31:32.937243 systemd[1]: Created slice kubepods-besteffort-pod3be7fc93_dc6d_492b_bf0b_0eb6ed63fef5.slice - libcontainer container kubepods-besteffort-pod3be7fc93_dc6d_492b_bf0b_0eb6ed63fef5.slice. 
Nov 1 00:31:32.984599 systemd[1]: Created slice kubepods-besteffort-pod68ef77d9_c28e_4552_8ad9_f26358f8691b.slice - libcontainer container kubepods-besteffort-pod68ef77d9_c28e_4552_8ad9_f26358f8691b.slice. 
Nov 1 00:31:32.998560 kubelet[3123]: I1101 00:31:32.996359 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5-calico-apiserver-certs\") pod \"calico-apiserver-57458876-7nj5x\" (UID: \"3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5\") " pod="calico-apiserver/calico-apiserver-57458876-7nj5x" 
Nov 1 00:31:32.998560 kubelet[3123]: I1101 00:31:32.996409 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqqpv\" (UniqueName: \"kubernetes.io/projected/3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5-kube-api-access-cqqpv\") pod \"calico-apiserver-57458876-7nj5x\" (UID: \"3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5\") " pod="calico-apiserver/calico-apiserver-57458876-7nj5x" 
Nov 1 00:31:33.001919 containerd[1835]: time="2025-11-01T00:31:33.001872610Z" level=info msg="shim disconnected" id=afe1369c13c5ed8d5e1a5a86169f2fbae6728392c7e7f7f9968be889cd6543f6 namespace=k8s.io 
Nov 1 00:31:33.001919 containerd[1835]: time="2025-11-01T00:31:33.001919946Z" level=warning msg="cleaning up after shim disconnected" id=afe1369c13c5ed8d5e1a5a86169f2fbae6728392c7e7f7f9968be889cd6543f6 namespace=k8s.io 
Nov 1 00:31:33.002206 containerd[1835]: time="2025-11-01T00:31:33.001929347Z" level=info msg="cleaning up dead shim" namespace=k8s.io 
Nov 1 00:31:33.011485 systemd[1]: Created slice kubepods-besteffort-pod0e047d2f_1491_42f0_a675_eff64087e5dd.slice - libcontainer container kubepods-besteffort-pod0e047d2f_1491_42f0_a675_eff64087e5dd.slice. 
Nov 1 00:31:33.013669 systemd[1]: Created slice kubepods-besteffort-pod54e9db7b_cb35_4455_991d_efa82c12e14b.slice - libcontainer container kubepods-besteffort-pod54e9db7b_cb35_4455_991d_efa82c12e14b.slice. 
Nov 1 00:31:33.055918 containerd[1835]: time="2025-11-01T00:31:33.055852143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6d7s5,Uid:bd354653-92ce-413f-9189-183709f503cd,Namespace:kube-system,Attempt:0,}" 
Nov 1 00:31:33.084529 containerd[1835]: time="2025-11-01T00:31:33.084502200Z" level=error msg="Failed to destroy network for sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 1 00:31:33.084704 containerd[1835]: time="2025-11-01T00:31:33.084689757Z" level=error msg="encountered an error cleaning up failed sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 1 00:31:33.084747 containerd[1835]: time="2025-11-01T00:31:33.084721683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6d7s5,Uid:bd354653-92ce-413f-9189-183709f503cd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 1 00:31:33.084899 kubelet[3123]: E1101 00:31:33.084878 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.084944 kubelet[3123]: E1101 00:31:33.084924 3123 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6d7s5" Nov 1 00:31:33.084944 kubelet[3123]: E1101 00:31:33.084938 3123 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6d7s5" Nov 1 00:31:33.085011 kubelet[3123]: E1101 00:31:33.084973 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6d7s5_kube-system(bd354653-92ce-413f-9189-183709f503cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6d7s5_kube-system(bd354653-92ce-413f-9189-183709f503cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6d7s5" 
podUID="bd354653-92ce-413f-9189-183709f503cd" Nov 1 00:31:33.085864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e-shm.mount: Deactivated successfully. Nov 1 00:31:33.097452 kubelet[3123]: I1101 00:31:33.097385 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/68ef77d9-c28e-4552-8ad9-f26358f8691b-calico-apiserver-certs\") pod \"calico-apiserver-57458876-8h7pk\" (UID: \"68ef77d9-c28e-4552-8ad9-f26358f8691b\") " pod="calico-apiserver/calico-apiserver-57458876-8h7pk" Nov 1 00:31:33.097452 kubelet[3123]: I1101 00:31:33.097405 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54e9db7b-cb35-4455-991d-efa82c12e14b-whisker-backend-key-pair\") pod \"whisker-6f588c5579-5fbf4\" (UID: \"54e9db7b-cb35-4455-991d-efa82c12e14b\") " pod="calico-system/whisker-6f588c5579-5fbf4" Nov 1 00:31:33.097452 kubelet[3123]: I1101 00:31:33.097418 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e047d2f-1491-42f0-a675-eff64087e5dd-config\") pod \"goldmane-7c778bb748-t7ml5\" (UID: \"0e047d2f-1491-42f0-a675-eff64087e5dd\") " pod="calico-system/goldmane-7c778bb748-t7ml5" Nov 1 00:31:33.097452 kubelet[3123]: I1101 00:31:33.097439 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khb6\" (UniqueName: \"kubernetes.io/projected/68ef77d9-c28e-4552-8ad9-f26358f8691b-kube-api-access-5khb6\") pod \"calico-apiserver-57458876-8h7pk\" (UID: \"68ef77d9-c28e-4552-8ad9-f26358f8691b\") " pod="calico-apiserver/calico-apiserver-57458876-8h7pk" Nov 1 00:31:33.097546 kubelet[3123]: I1101 00:31:33.097455 3123 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e047d2f-1491-42f0-a675-eff64087e5dd-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-t7ml5\" (UID: \"0e047d2f-1491-42f0-a675-eff64087e5dd\") " pod="calico-system/goldmane-7c778bb748-t7ml5" Nov 1 00:31:33.097546 kubelet[3123]: I1101 00:31:33.097465 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54e9db7b-cb35-4455-991d-efa82c12e14b-whisker-ca-bundle\") pod \"whisker-6f588c5579-5fbf4\" (UID: \"54e9db7b-cb35-4455-991d-efa82c12e14b\") " pod="calico-system/whisker-6f588c5579-5fbf4" Nov 1 00:31:33.097546 kubelet[3123]: I1101 00:31:33.097495 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0e047d2f-1491-42f0-a675-eff64087e5dd-goldmane-key-pair\") pod \"goldmane-7c778bb748-t7ml5\" (UID: \"0e047d2f-1491-42f0-a675-eff64087e5dd\") " pod="calico-system/goldmane-7c778bb748-t7ml5" Nov 1 00:31:33.097546 kubelet[3123]: I1101 00:31:33.097512 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk2sr\" (UniqueName: \"kubernetes.io/projected/0e047d2f-1491-42f0-a675-eff64087e5dd-kube-api-access-jk2sr\") pod \"goldmane-7c778bb748-t7ml5\" (UID: \"0e047d2f-1491-42f0-a675-eff64087e5dd\") " pod="calico-system/goldmane-7c778bb748-t7ml5" Nov 1 00:31:33.097546 kubelet[3123]: I1101 00:31:33.097530 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plq8j\" (UniqueName: \"kubernetes.io/projected/54e9db7b-cb35-4455-991d-efa82c12e14b-kube-api-access-plq8j\") pod \"whisker-6f588c5579-5fbf4\" (UID: \"54e9db7b-cb35-4455-991d-efa82c12e14b\") " pod="calico-system/whisker-6f588c5579-5fbf4" Nov 1 00:31:33.133260 
containerd[1835]: time="2025-11-01T00:31:33.133222483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vcqfb,Uid:9a95ee66-2c19-4f09-bcfd-9c4e55da76e6,Namespace:kube-system,Attempt:0,}" Nov 1 00:31:33.150316 containerd[1835]: time="2025-11-01T00:31:33.150266135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c65644df-6srvh,Uid:055e53bc-992b-4781-aa59-63b9452c2f8e,Namespace:calico-system,Attempt:0,}" Nov 1 00:31:33.159965 containerd[1835]: time="2025-11-01T00:31:33.159935518Z" level=error msg="Failed to destroy network for sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.160131 containerd[1835]: time="2025-11-01T00:31:33.160118780Z" level=error msg="encountered an error cleaning up failed sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.160162 containerd[1835]: time="2025-11-01T00:31:33.160151974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vcqfb,Uid:9a95ee66-2c19-4f09-bcfd-9c4e55da76e6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.160319 kubelet[3123]: E1101 00:31:33.160294 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.160358 kubelet[3123]: E1101 00:31:33.160335 3123 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vcqfb" Nov 1 00:31:33.160378 kubelet[3123]: E1101 00:31:33.160363 3123 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vcqfb" Nov 1 00:31:33.160409 kubelet[3123]: E1101 00:31:33.160396 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-vcqfb_kube-system(9a95ee66-2c19-4f09-bcfd-9c4e55da76e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vcqfb_kube-system(9a95ee66-2c19-4f09-bcfd-9c4e55da76e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-66bc5c9577-vcqfb" podUID="9a95ee66-2c19-4f09-bcfd-9c4e55da76e6" Nov 1 00:31:33.176084 containerd[1835]: time="2025-11-01T00:31:33.176056260Z" level=error msg="Failed to destroy network for sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.176285 containerd[1835]: time="2025-11-01T00:31:33.176247959Z" level=error msg="encountered an error cleaning up failed sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.176285 containerd[1835]: time="2025-11-01T00:31:33.176276339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c65644df-6srvh,Uid:055e53bc-992b-4781-aa59-63b9452c2f8e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.176454 kubelet[3123]: E1101 00:31:33.176422 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.176494 kubelet[3123]: E1101 00:31:33.176468 3123 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" Nov 1 00:31:33.176494 kubelet[3123]: E1101 00:31:33.176481 3123 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" Nov 1 00:31:33.176534 kubelet[3123]: E1101 00:31:33.176512 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75c65644df-6srvh_calico-system(055e53bc-992b-4781-aa59-63b9452c2f8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75c65644df-6srvh_calico-system(055e53bc-992b-4781-aa59-63b9452c2f8e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:31:33.281667 containerd[1835]: time="2025-11-01T00:31:33.281597993Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57458876-7nj5x,Uid:3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:31:33.303006 containerd[1835]: time="2025-11-01T00:31:33.302982672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57458876-8h7pk,Uid:68ef77d9-c28e-4552-8ad9-f26358f8691b,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:31:33.308259 containerd[1835]: time="2025-11-01T00:31:33.308212974Z" level=error msg="Failed to destroy network for sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.308387 containerd[1835]: time="2025-11-01T00:31:33.308363401Z" level=error msg="encountered an error cleaning up failed sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.308409 containerd[1835]: time="2025-11-01T00:31:33.308390988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57458876-7nj5x,Uid:3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.308537 kubelet[3123]: E1101 00:31:33.308518 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.308581 kubelet[3123]: E1101 00:31:33.308548 3123 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" Nov 1 00:31:33.308581 kubelet[3123]: E1101 00:31:33.308561 3123 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" Nov 1 00:31:33.308626 kubelet[3123]: E1101 00:31:33.308594 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57458876-7nj5x_calico-apiserver(3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57458876-7nj5x_calico-apiserver(3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:31:33.325031 containerd[1835]: time="2025-11-01T00:31:33.324980828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t7ml5,Uid:0e047d2f-1491-42f0-a675-eff64087e5dd,Namespace:calico-system,Attempt:0,}" Nov 1 00:31:33.325701 containerd[1835]: time="2025-11-01T00:31:33.325646875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f588c5579-5fbf4,Uid:54e9db7b-cb35-4455-991d-efa82c12e14b,Namespace:calico-system,Attempt:0,}" Nov 1 00:31:33.352191 containerd[1835]: time="2025-11-01T00:31:33.352084374Z" level=error msg="Failed to destroy network for sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.352288 containerd[1835]: time="2025-11-01T00:31:33.352274420Z" level=error msg="encountered an error cleaning up failed sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.352316 containerd[1835]: time="2025-11-01T00:31:33.352303517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57458876-8h7pk,Uid:68ef77d9-c28e-4552-8ad9-f26358f8691b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.352490 
kubelet[3123]: E1101 00:31:33.352459 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.352522 kubelet[3123]: E1101 00:31:33.352512 3123 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" Nov 1 00:31:33.352548 kubelet[3123]: E1101 00:31:33.352531 3123 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" Nov 1 00:31:33.352593 kubelet[3123]: E1101 00:31:33.352576 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57458876-8h7pk_calico-apiserver(68ef77d9-c28e-4552-8ad9-f26358f8691b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57458876-8h7pk_calico-apiserver(68ef77d9-c28e-4552-8ad9-f26358f8691b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:31:33.354081 containerd[1835]: time="2025-11-01T00:31:33.354063177Z" level=error msg="Failed to destroy network for sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.354262 containerd[1835]: time="2025-11-01T00:31:33.354220463Z" level=error msg="encountered an error cleaning up failed sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.354262 containerd[1835]: time="2025-11-01T00:31:33.354250763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t7ml5,Uid:0e047d2f-1491-42f0-a675-eff64087e5dd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.354429 kubelet[3123]: E1101 00:31:33.354383 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.354429 kubelet[3123]: E1101 00:31:33.354412 3123 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-t7ml5" Nov 1 00:31:33.354429 kubelet[3123]: E1101 00:31:33.354425 3123 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-t7ml5" Nov 1 00:31:33.354537 kubelet[3123]: E1101 00:31:33.354462 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-t7ml5_calico-system(0e047d2f-1491-42f0-a675-eff64087e5dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-t7ml5_calico-system(0e047d2f-1491-42f0-a675-eff64087e5dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:31:33.355561 containerd[1835]: time="2025-11-01T00:31:33.355547512Z" level=error msg="Failed to destroy network for 
sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.355716 containerd[1835]: time="2025-11-01T00:31:33.355678149Z" level=error msg="encountered an error cleaning up failed sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.355716 containerd[1835]: time="2025-11-01T00:31:33.355700329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f588c5579-5fbf4,Uid:54e9db7b-cb35-4455-991d-efa82c12e14b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.355830 kubelet[3123]: E1101 00:31:33.355809 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.355857 kubelet[3123]: E1101 00:31:33.355830 3123 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f588c5579-5fbf4" Nov 1 00:31:33.355857 kubelet[3123]: E1101 00:31:33.355840 3123 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f588c5579-5fbf4" Nov 1 00:31:33.355899 kubelet[3123]: E1101 00:31:33.355861 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f588c5579-5fbf4_calico-system(54e9db7b-cb35-4455-991d-efa82c12e14b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f588c5579-5fbf4_calico-system(54e9db7b-cb35-4455-991d-efa82c12e14b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f588c5579-5fbf4" podUID="54e9db7b-cb35-4455-991d-efa82c12e14b" Nov 1 00:31:33.712538 kubelet[3123]: I1101 00:31:33.712334 3123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:31:33.713766 containerd[1835]: time="2025-11-01T00:31:33.712313064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:31:33.714042 containerd[1835]: time="2025-11-01T00:31:33.713948166Z" level=info msg="StopPodSandbox for 
\"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\"" Nov 1 00:31:33.714665 containerd[1835]: time="2025-11-01T00:31:33.714578767Z" level=info msg="Ensure that sandbox d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c in task-service has been cleanup successfully" Nov 1 00:31:33.715237 kubelet[3123]: I1101 00:31:33.715178 3123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:31:33.716225 containerd[1835]: time="2025-11-01T00:31:33.716210039Z" level=info msg="StopPodSandbox for \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\"" Nov 1 00:31:33.716336 containerd[1835]: time="2025-11-01T00:31:33.716324965Z" level=info msg="Ensure that sandbox 0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b in task-service has been cleanup successfully" Nov 1 00:31:33.716503 kubelet[3123]: I1101 00:31:33.716493 3123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:31:33.716736 containerd[1835]: time="2025-11-01T00:31:33.716723078Z" level=info msg="StopPodSandbox for \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\"" Nov 1 00:31:33.716833 containerd[1835]: time="2025-11-01T00:31:33.716823017Z" level=info msg="Ensure that sandbox 664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf in task-service has been cleanup successfully" Nov 1 00:31:33.717004 kubelet[3123]: I1101 00:31:33.716987 3123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:31:33.717292 containerd[1835]: time="2025-11-01T00:31:33.717279357Z" level=info msg="StopPodSandbox for \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\"" Nov 1 00:31:33.717379 containerd[1835]: 
time="2025-11-01T00:31:33.717365717Z" level=info msg="Ensure that sandbox 60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9 in task-service has been cleanup successfully" Nov 1 00:31:33.717486 kubelet[3123]: I1101 00:31:33.717477 3123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:31:33.717711 containerd[1835]: time="2025-11-01T00:31:33.717693344Z" level=info msg="StopPodSandbox for \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\"" Nov 1 00:31:33.717821 containerd[1835]: time="2025-11-01T00:31:33.717811131Z" level=info msg="Ensure that sandbox 7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c in task-service has been cleanup successfully" Nov 1 00:31:33.718094 kubelet[3123]: I1101 00:31:33.718073 3123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:31:33.718392 containerd[1835]: time="2025-11-01T00:31:33.718368855Z" level=info msg="StopPodSandbox for \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\"" Nov 1 00:31:33.718518 containerd[1835]: time="2025-11-01T00:31:33.718505500Z" level=info msg="Ensure that sandbox 666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b in task-service has been cleanup successfully" Nov 1 00:31:33.718683 kubelet[3123]: I1101 00:31:33.718671 3123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:31:33.719142 containerd[1835]: time="2025-11-01T00:31:33.719123735Z" level=info msg="StopPodSandbox for \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\"" Nov 1 00:31:33.719238 containerd[1835]: time="2025-11-01T00:31:33.719227158Z" level=info msg="Ensure that sandbox 
4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e in task-service has been cleanup successfully" Nov 1 00:31:33.733516 containerd[1835]: time="2025-11-01T00:31:33.733485670Z" level=error msg="StopPodSandbox for \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\" failed" error="failed to destroy network for sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.733652 containerd[1835]: time="2025-11-01T00:31:33.733539344Z" level=error msg="StopPodSandbox for \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\" failed" error="failed to destroy network for sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.733737 kubelet[3123]: E1101 00:31:33.733716 3123 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:31:33.733768 containerd[1835]: time="2025-11-01T00:31:33.733752423Z" level=error msg="StopPodSandbox for \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\" failed" error="failed to destroy network for sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.733796 kubelet[3123]: E1101 00:31:33.733756 3123 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e"} Nov 1 00:31:33.733815 kubelet[3123]: E1101 00:31:33.733796 3123 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd354653-92ce-413f-9189-183709f503cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:31:33.733856 kubelet[3123]: E1101 00:31:33.733716 3123 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:31:33.733856 kubelet[3123]: E1101 00:31:33.733815 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd354653-92ce-413f-9189-183709f503cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-66bc5c9577-6d7s5" podUID="bd354653-92ce-413f-9189-183709f503cd" Nov 1 00:31:33.733856 kubelet[3123]: E1101 00:31:33.733820 3123 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:31:33.733856 kubelet[3123]: E1101 00:31:33.733844 3123 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf"} Nov 1 00:31:33.733936 kubelet[3123]: E1101 00:31:33.733858 3123 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"055e53bc-992b-4781-aa59-63b9452c2f8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:31:33.733936 kubelet[3123]: E1101 00:31:33.733874 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"055e53bc-992b-4781-aa59-63b9452c2f8e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:31:33.733936 kubelet[3123]: E1101 00:31:33.733828 3123 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9"} Nov 1 00:31:33.733936 kubelet[3123]: E1101 00:31:33.733896 3123 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e047d2f-1491-42f0-a675-eff64087e5dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:31:33.734035 containerd[1835]: time="2025-11-01T00:31:33.733911943Z" level=error msg="StopPodSandbox for \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\" failed" error="failed to destroy network for sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.734035 containerd[1835]: time="2025-11-01T00:31:33.733926591Z" level=error msg="StopPodSandbox for \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\" failed" error="failed to destroy network for sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.734079 kubelet[3123]: E1101 00:31:33.733907 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"0e047d2f-1491-42f0-a675-eff64087e5dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:31:33.734079 kubelet[3123]: E1101 00:31:33.733987 3123 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:31:33.734079 kubelet[3123]: E1101 00:31:33.733993 3123 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:31:33.734079 kubelet[3123]: E1101 00:31:33.734001 3123 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c"} Nov 1 00:31:33.734079 kubelet[3123]: E1101 00:31:33.734008 3123 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b"} Nov 1 00:31:33.734186 kubelet[3123]: E1101 00:31:33.734013 3123 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68ef77d9-c28e-4552-8ad9-f26358f8691b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:31:33.734186 kubelet[3123]: E1101 00:31:33.734022 3123 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:31:33.734186 kubelet[3123]: E1101 00:31:33.734033 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68ef77d9-c28e-4552-8ad9-f26358f8691b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:31:33.734266 kubelet[3123]: E1101 00:31:33.734035 3123 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:31:33.734293 containerd[1835]: time="2025-11-01T00:31:33.734228760Z" level=error msg="StopPodSandbox for \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\" failed" error="failed to destroy network for sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.734311 kubelet[3123]: E1101 00:31:33.734300 3123 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:31:33.734332 kubelet[3123]: E1101 00:31:33.734314 3123 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c"} Nov 1 00:31:33.734332 kubelet[3123]: E1101 00:31:33.734328 3123 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54e9db7b-cb35-4455-991d-efa82c12e14b\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:31:33.734378 kubelet[3123]: E1101 00:31:33.734340 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54e9db7b-cb35-4455-991d-efa82c12e14b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f588c5579-5fbf4" podUID="54e9db7b-cb35-4455-991d-efa82c12e14b" Nov 1 00:31:33.734704 containerd[1835]: time="2025-11-01T00:31:33.734690720Z" level=error msg="StopPodSandbox for \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\" failed" error="failed to destroy network for sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:33.734800 kubelet[3123]: E1101 00:31:33.734792 3123 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:31:33.734843 kubelet[3123]: E1101 00:31:33.734803 3123 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b"} Nov 1 00:31:33.734843 kubelet[3123]: E1101 00:31:33.734833 3123 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a95ee66-2c19-4f09-bcfd-9c4e55da76e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:31:33.734893 kubelet[3123]: E1101 00:31:33.734843 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a95ee66-2c19-4f09-bcfd-9c4e55da76e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vcqfb" podUID="9a95ee66-2c19-4f09-bcfd-9c4e55da76e6" Nov 1 00:31:34.624986 systemd[1]: Created slice kubepods-besteffort-pod0245246d_bdc5_450d_b21c_5eff759295d4.slice - libcontainer container kubepods-besteffort-pod0245246d_bdc5_450d_b21c_5eff759295d4.slice. 
Nov 1 00:31:34.626898 containerd[1835]: time="2025-11-01T00:31:34.626877817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vvbjm,Uid:0245246d-bdc5-450d-b21c-5eff759295d4,Namespace:calico-system,Attempt:0,}" Nov 1 00:31:34.654597 containerd[1835]: time="2025-11-01T00:31:34.654568218Z" level=error msg="Failed to destroy network for sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:34.654805 containerd[1835]: time="2025-11-01T00:31:34.654789845Z" level=error msg="encountered an error cleaning up failed sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:34.654856 containerd[1835]: time="2025-11-01T00:31:34.654818407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vvbjm,Uid:0245246d-bdc5-450d-b21c-5eff759295d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:34.655008 kubelet[3123]: E1101 00:31:34.654982 3123 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 00:31:34.655085 kubelet[3123]: E1101 00:31:34.655027 3123 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vvbjm" Nov 1 00:31:34.655085 kubelet[3123]: E1101 00:31:34.655046 3123 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vvbjm" Nov 1 00:31:34.655184 kubelet[3123]: E1101 00:31:34.655152 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:31:34.656164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2-shm.mount: Deactivated 
successfully. Nov 1 00:31:34.723920 kubelet[3123]: I1101 00:31:34.723820 3123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:31:34.725184 containerd[1835]: time="2025-11-01T00:31:34.725078854Z" level=info msg="StopPodSandbox for \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\"" Nov 1 00:31:34.725653 containerd[1835]: time="2025-11-01T00:31:34.725585254Z" level=info msg="Ensure that sandbox f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2 in task-service has been cleanup successfully" Nov 1 00:31:34.789047 containerd[1835]: time="2025-11-01T00:31:34.788947576Z" level=error msg="StopPodSandbox for \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\" failed" error="failed to destroy network for sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:31:34.789514 kubelet[3123]: E1101 00:31:34.789398 3123 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:31:34.789719 kubelet[3123]: E1101 00:31:34.789515 3123 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2"} Nov 1 00:31:34.789719 kubelet[3123]: E1101 00:31:34.789599 3123 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"0245246d-bdc5-450d-b21c-5eff759295d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:31:34.789719 kubelet[3123]: E1101 00:31:34.789665 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0245246d-bdc5-450d-b21c-5eff759295d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:31:36.857478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2268362078.mount: Deactivated successfully. 
Nov 1 00:31:36.874364 containerd[1835]: time="2025-11-01T00:31:36.874330561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:31:36.874571 containerd[1835]: time="2025-11-01T00:31:36.874551219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:31:36.874875 containerd[1835]: time="2025-11-01T00:31:36.874862758Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:31:36.875724 containerd[1835]: time="2025-11-01T00:31:36.875712372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:31:36.876099 containerd[1835]: time="2025-11-01T00:31:36.876083005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.163691208s" Nov 1 00:31:36.876122 containerd[1835]: time="2025-11-01T00:31:36.876102991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:31:36.880084 containerd[1835]: time="2025-11-01T00:31:36.880064112Z" level=info msg="CreateContainer within sandbox \"ec43a968a0f9ab5065af117d1a0bf1675849183d81180c5f3452adfbe2534a2c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:31:36.885666 containerd[1835]: time="2025-11-01T00:31:36.885624224Z" level=info msg="CreateContainer 
within sandbox \"ec43a968a0f9ab5065af117d1a0bf1675849183d81180c5f3452adfbe2534a2c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"144b5f2305c3679e9af5dc770e440a42f3353b9e8d953bc7ddf516ee690256d1\"" Nov 1 00:31:36.885914 containerd[1835]: time="2025-11-01T00:31:36.885863067Z" level=info msg="StartContainer for \"144b5f2305c3679e9af5dc770e440a42f3353b9e8d953bc7ddf516ee690256d1\"" Nov 1 00:31:36.905409 systemd[1]: Started cri-containerd-144b5f2305c3679e9af5dc770e440a42f3353b9e8d953bc7ddf516ee690256d1.scope - libcontainer container 144b5f2305c3679e9af5dc770e440a42f3353b9e8d953bc7ddf516ee690256d1. Nov 1 00:31:36.920318 containerd[1835]: time="2025-11-01T00:31:36.920291172Z" level=info msg="StartContainer for \"144b5f2305c3679e9af5dc770e440a42f3353b9e8d953bc7ddf516ee690256d1\" returns successfully" Nov 1 00:31:36.993829 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:31:36.993888 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 1 00:31:37.039186 containerd[1835]: time="2025-11-01T00:31:37.039154257Z" level=info msg="StopPodSandbox for \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\"" Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.064 [INFO][4717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.064 [INFO][4717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" iface="eth0" netns="/var/run/netns/cni-7273ddba-9c54-8500-ea23-d2a5314f28a5" Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.065 [INFO][4717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" iface="eth0" netns="/var/run/netns/cni-7273ddba-9c54-8500-ea23-d2a5314f28a5" Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.065 [INFO][4717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" iface="eth0" netns="/var/run/netns/cni-7273ddba-9c54-8500-ea23-d2a5314f28a5" Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.065 [INFO][4717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.065 [INFO][4717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.075 [INFO][4746] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" HandleID="k8s-pod-network.d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.075 [INFO][4746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.075 [INFO][4746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.079 [WARNING][4746] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" HandleID="k8s-pod-network.d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.079 [INFO][4746] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" HandleID="k8s-pod-network.d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.079 [INFO][4746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:37.082149 containerd[1835]: 2025-11-01 00:31:37.081 [INFO][4717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:31:37.082525 containerd[1835]: time="2025-11-01T00:31:37.082481254Z" level=info msg="TearDown network for sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\" successfully" Nov 1 00:31:37.082525 containerd[1835]: time="2025-11-01T00:31:37.082499006Z" level=info msg="StopPodSandbox for \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\" returns successfully" Nov 1 00:31:37.128984 kubelet[3123]: I1101 00:31:37.128911 3123 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54e9db7b-cb35-4455-991d-efa82c12e14b-whisker-ca-bundle\") pod \"54e9db7b-cb35-4455-991d-efa82c12e14b\" (UID: \"54e9db7b-cb35-4455-991d-efa82c12e14b\") " Nov 1 00:31:37.128984 kubelet[3123]: I1101 00:31:37.128940 3123 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/54e9db7b-cb35-4455-991d-efa82c12e14b-whisker-backend-key-pair\") pod \"54e9db7b-cb35-4455-991d-efa82c12e14b\" (UID: \"54e9db7b-cb35-4455-991d-efa82c12e14b\") " Nov 1 00:31:37.128984 kubelet[3123]: I1101 00:31:37.128956 3123 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plq8j\" (UniqueName: \"kubernetes.io/projected/54e9db7b-cb35-4455-991d-efa82c12e14b-kube-api-access-plq8j\") pod \"54e9db7b-cb35-4455-991d-efa82c12e14b\" (UID: \"54e9db7b-cb35-4455-991d-efa82c12e14b\") " Nov 1 00:31:37.129355 kubelet[3123]: I1101 00:31:37.129210 3123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54e9db7b-cb35-4455-991d-efa82c12e14b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "54e9db7b-cb35-4455-991d-efa82c12e14b" (UID: "54e9db7b-cb35-4455-991d-efa82c12e14b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:31:37.130568 kubelet[3123]: I1101 00:31:37.130553 3123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54e9db7b-cb35-4455-991d-efa82c12e14b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "54e9db7b-cb35-4455-991d-efa82c12e14b" (UID: "54e9db7b-cb35-4455-991d-efa82c12e14b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:31:37.130621 kubelet[3123]: I1101 00:31:37.130606 3123 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e9db7b-cb35-4455-991d-efa82c12e14b-kube-api-access-plq8j" (OuterVolumeSpecName: "kube-api-access-plq8j") pod "54e9db7b-cb35-4455-991d-efa82c12e14b" (UID: "54e9db7b-cb35-4455-991d-efa82c12e14b"). InnerVolumeSpecName "kube-api-access-plq8j". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:31:37.229891 kubelet[3123]: I1101 00:31:37.229834 3123 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54e9db7b-cb35-4455-991d-efa82c12e14b-whisker-ca-bundle\") on node \"ci-4081.3.6-n-d37906c143\" DevicePath \"\"" Nov 1 00:31:37.229891 kubelet[3123]: I1101 00:31:37.229887 3123 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54e9db7b-cb35-4455-991d-efa82c12e14b-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-d37906c143\" DevicePath \"\"" Nov 1 00:31:37.230257 kubelet[3123]: I1101 00:31:37.229917 3123 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-plq8j\" (UniqueName: \"kubernetes.io/projected/54e9db7b-cb35-4455-991d-efa82c12e14b-kube-api-access-plq8j\") on node \"ci-4081.3.6-n-d37906c143\" DevicePath \"\"" Nov 1 00:31:37.740976 systemd[1]: Removed slice kubepods-besteffort-pod54e9db7b_cb35_4455_991d_efa82c12e14b.slice - libcontainer container kubepods-besteffort-pod54e9db7b_cb35_4455_991d_efa82c12e14b.slice. Nov 1 00:31:37.748035 kubelet[3123]: I1101 00:31:37.747998 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kl2tg" podStartSLOduration=2.191827309 podStartE2EDuration="12.747984478s" podCreationTimestamp="2025-11-01 00:31:25 +0000 UTC" firstStartedPulling="2025-11-01 00:31:26.320313133 +0000 UTC m=+17.742545617" lastFinishedPulling="2025-11-01 00:31:36.876470302 +0000 UTC m=+28.298702786" observedRunningTime="2025-11-01 00:31:37.747632608 +0000 UTC m=+29.169865094" watchObservedRunningTime="2025-11-01 00:31:37.747984478 +0000 UTC m=+29.170216959" Nov 1 00:31:37.773734 systemd[1]: Created slice kubepods-besteffort-pode2d29ca0_c92c_40d7_8210_622ae9e53eeb.slice - libcontainer container kubepods-besteffort-pode2d29ca0_c92c_40d7_8210_622ae9e53eeb.slice. 
Nov 1 00:31:37.834788 kubelet[3123]: I1101 00:31:37.834662 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2d29ca0-c92c-40d7-8210-622ae9e53eeb-whisker-backend-key-pair\") pod \"whisker-544cf4559-wdzr2\" (UID: \"e2d29ca0-c92c-40d7-8210-622ae9e53eeb\") " pod="calico-system/whisker-544cf4559-wdzr2" Nov 1 00:31:37.835050 kubelet[3123]: I1101 00:31:37.834903 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv6bt\" (UniqueName: \"kubernetes.io/projected/e2d29ca0-c92c-40d7-8210-622ae9e53eeb-kube-api-access-qv6bt\") pod \"whisker-544cf4559-wdzr2\" (UID: \"e2d29ca0-c92c-40d7-8210-622ae9e53eeb\") " pod="calico-system/whisker-544cf4559-wdzr2" Nov 1 00:31:37.835177 kubelet[3123]: I1101 00:31:37.835125 3123 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2d29ca0-c92c-40d7-8210-622ae9e53eeb-whisker-ca-bundle\") pod \"whisker-544cf4559-wdzr2\" (UID: \"e2d29ca0-c92c-40d7-8210-622ae9e53eeb\") " pod="calico-system/whisker-544cf4559-wdzr2" Nov 1 00:31:37.865681 systemd[1]: run-netns-cni\x2d7273ddba\x2d9c54\x2d8500\x2dea23\x2dd2a5314f28a5.mount: Deactivated successfully. Nov 1 00:31:37.865912 systemd[1]: var-lib-kubelet-pods-54e9db7b\x2dcb35\x2d4455\x2d991d\x2defa82c12e14b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dplq8j.mount: Deactivated successfully. Nov 1 00:31:37.866113 systemd[1]: var-lib-kubelet-pods-54e9db7b\x2dcb35\x2d4455\x2d991d\x2defa82c12e14b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 00:31:38.078275 containerd[1835]: time="2025-11-01T00:31:38.078226463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544cf4559-wdzr2,Uid:e2d29ca0-c92c-40d7-8210-622ae9e53eeb,Namespace:calico-system,Attempt:0,}" Nov 1 00:31:38.156423 systemd-networkd[1521]: cali5002e3f8f92: Link UP Nov 1 00:31:38.156579 systemd-networkd[1521]: cali5002e3f8f92: Gained carrier Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.092 [INFO][4775] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.099 [INFO][4775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0 whisker-544cf4559- calico-system e2d29ca0-c92c-40d7-8210-622ae9e53eeb 867 0 2025-11-01 00:31:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:544cf4559 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-d37906c143 whisker-544cf4559-wdzr2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5002e3f8f92 [] [] }} ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Namespace="calico-system" Pod="whisker-544cf4559-wdzr2" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.099 [INFO][4775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Namespace="calico-system" Pod="whisker-544cf4559-wdzr2" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.112 [INFO][4797] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" HandleID="k8s-pod-network.013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.112 [INFO][4797] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" HandleID="k8s-pod-network.013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006176b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d37906c143", "pod":"whisker-544cf4559-wdzr2", "timestamp":"2025-11-01 00:31:38.112223912 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d37906c143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.112 [INFO][4797] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.112 [INFO][4797] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.112 [INFO][4797] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d37906c143' Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.119 [INFO][4797] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.124 [INFO][4797] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.129 [INFO][4797] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.131 [INFO][4797] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.134 [INFO][4797] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.134 [INFO][4797] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.136 [INFO][4797] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.139 [INFO][4797] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.142 [INFO][4797] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.66.129/26] block=192.168.66.128/26 handle="k8s-pod-network.013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.142 [INFO][4797] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.129/26] handle="k8s-pod-network.013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.142 [INFO][4797] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:38.162280 containerd[1835]: 2025-11-01 00:31:38.142 [INFO][4797] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.129/26] IPv6=[] ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" HandleID="k8s-pod-network.013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0" Nov 1 00:31:38.162939 containerd[1835]: 2025-11-01 00:31:38.144 [INFO][4775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Namespace="calico-system" Pod="whisker-544cf4559-wdzr2" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0", GenerateName:"whisker-544cf4559-", Namespace:"calico-system", SelfLink:"", UID:"e2d29ca0-c92c-40d7-8210-622ae9e53eeb", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"544cf4559", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"", Pod:"whisker-544cf4559-wdzr2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5002e3f8f92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:38.162939 containerd[1835]: 2025-11-01 00:31:38.144 [INFO][4775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.129/32] ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Namespace="calico-system" Pod="whisker-544cf4559-wdzr2" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0" Nov 1 00:31:38.162939 containerd[1835]: 2025-11-01 00:31:38.144 [INFO][4775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5002e3f8f92 ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Namespace="calico-system" Pod="whisker-544cf4559-wdzr2" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0" Nov 1 00:31:38.162939 containerd[1835]: 2025-11-01 00:31:38.156 [INFO][4775] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Namespace="calico-system" Pod="whisker-544cf4559-wdzr2" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0" Nov 1 00:31:38.162939 containerd[1835]: 2025-11-01 00:31:38.156 [INFO][4775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Namespace="calico-system" Pod="whisker-544cf4559-wdzr2" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0", GenerateName:"whisker-544cf4559-", Namespace:"calico-system", SelfLink:"", UID:"e2d29ca0-c92c-40d7-8210-622ae9e53eeb", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"544cf4559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de", Pod:"whisker-544cf4559-wdzr2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5002e3f8f92", MAC:"96:51:bd:b9:c3:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:38.162939 containerd[1835]: 2025-11-01 00:31:38.161 [INFO][4775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de" Namespace="calico-system" Pod="whisker-544cf4559-wdzr2" 
WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-whisker--544cf4559--wdzr2-eth0" Nov 1 00:31:38.170710 containerd[1835]: time="2025-11-01T00:31:38.170459181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:38.170710 containerd[1835]: time="2025-11-01T00:31:38.170694620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:38.170710 containerd[1835]: time="2025-11-01T00:31:38.170703771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:38.170885 containerd[1835]: time="2025-11-01T00:31:38.170764855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:38.189607 systemd[1]: Started cri-containerd-013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de.scope - libcontainer container 013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de. 
Nov 1 00:31:38.246083 containerd[1835]: time="2025-11-01T00:31:38.246057996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544cf4559-wdzr2,Uid:e2d29ca0-c92c-40d7-8210-622ae9e53eeb,Namespace:calico-system,Attempt:0,} returns sandbox id \"013fb334aeff033b491cc16a764dbfd3a35f7643076ac3c595e860bbe1b357de\"" Nov 1 00:31:38.246959 containerd[1835]: time="2025-11-01T00:31:38.246946730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:31:38.256170 kernel: bpftool[5008]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:31:38.412834 systemd-networkd[1521]: vxlan.calico: Link UP Nov 1 00:31:38.412838 systemd-networkd[1521]: vxlan.calico: Gained carrier Nov 1 00:31:38.621920 kubelet[3123]: I1101 00:31:38.621873 3123 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54e9db7b-cb35-4455-991d-efa82c12e14b" path="/var/lib/kubelet/pods/54e9db7b-cb35-4455-991d-efa82c12e14b/volumes" Nov 1 00:31:38.632200 containerd[1835]: time="2025-11-01T00:31:38.632178545Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:38.632539 containerd[1835]: time="2025-11-01T00:31:38.632521488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:31:38.632603 containerd[1835]: time="2025-11-01T00:31:38.632541088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:31:38.632638 kubelet[3123]: E1101 00:31:38.632609 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:31:38.632657 kubelet[3123]: E1101 00:31:38.632644 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:31:38.632701 kubelet[3123]: E1101 00:31:38.632690 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:38.633042 containerd[1835]: time="2025-11-01T00:31:38.633031004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:31:38.740962 kubelet[3123]: I1101 00:31:38.740751 3123 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:31:39.006210 containerd[1835]: time="2025-11-01T00:31:39.006123991Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:39.007146 containerd[1835]: time="2025-11-01T00:31:39.007054128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:31:39.007146 containerd[1835]: 
time="2025-11-01T00:31:39.007128187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:31:39.007272 kubelet[3123]: E1101 00:31:39.007221 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:31:39.007272 kubelet[3123]: E1101 00:31:39.007252 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:31:39.007334 kubelet[3123]: E1101 00:31:39.007300 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:39.007353 kubelet[3123]: E1101 00:31:39.007327 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:31:39.747971 kubelet[3123]: E1101 00:31:39.747832 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:31:39.888435 systemd-networkd[1521]: cali5002e3f8f92: Gained IPv6LL Nov 1 00:31:40.208442 systemd-networkd[1521]: vxlan.calico: Gained IPv6LL Nov 1 00:31:45.621827 containerd[1835]: time="2025-11-01T00:31:45.621720108Z" level=info msg="StopPodSandbox for \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\"" Nov 1 00:31:45.623224 containerd[1835]: time="2025-11-01T00:31:45.621720139Z" level=info msg="StopPodSandbox for \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\"" Nov 1 
00:31:45.623224 containerd[1835]: time="2025-11-01T00:31:45.622330766Z" level=info msg="StopPodSandbox for \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\"" Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5168] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" iface="eth0" netns="/var/run/netns/cni-0b5fd696-732f-e043-860d-a9fe2b22e976" Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5168] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" iface="eth0" netns="/var/run/netns/cni-0b5fd696-732f-e043-860d-a9fe2b22e976" Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5168] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" iface="eth0" netns="/var/run/netns/cni-0b5fd696-732f-e043-860d-a9fe2b22e976" Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.661 [INFO][5218] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" HandleID="k8s-pod-network.4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.661 [INFO][5218] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.661 [INFO][5218] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.664 [WARNING][5218] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" HandleID="k8s-pod-network.4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.665 [INFO][5218] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" HandleID="k8s-pod-network.4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.666 [INFO][5218] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:45.667352 containerd[1835]: 2025-11-01 00:31:45.666 [INFO][5168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:31:45.667631 containerd[1835]: time="2025-11-01T00:31:45.667431231Z" level=info msg="TearDown network for sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\" successfully" Nov 1 00:31:45.667631 containerd[1835]: time="2025-11-01T00:31:45.667452561Z" level=info msg="StopPodSandbox for \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\" returns successfully" Nov 1 00:31:45.668468 containerd[1835]: time="2025-11-01T00:31:45.668449958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6d7s5,Uid:bd354653-92ce-413f-9189-183709f503cd,Namespace:kube-system,Attempt:1,}" Nov 1 00:31:45.669253 systemd[1]: run-netns-cni\x2d0b5fd696\x2d732f\x2de043\x2d860d\x2da9fe2b22e976.mount: Deactivated successfully. 
Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.650 [INFO][5169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.650 [INFO][5169] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" iface="eth0" netns="/var/run/netns/cni-5a1eabc9-98de-bd49-eb07-fcb6cb93bbc7" Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.650 [INFO][5169] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" iface="eth0" netns="/var/run/netns/cni-5a1eabc9-98de-bd49-eb07-fcb6cb93bbc7" Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5169] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" iface="eth0" netns="/var/run/netns/cni-5a1eabc9-98de-bd49-eb07-fcb6cb93bbc7" Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.661 [INFO][5214] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" HandleID="k8s-pod-network.60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.661 [INFO][5214] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.666 [INFO][5214] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.669 [WARNING][5214] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" HandleID="k8s-pod-network.60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.669 [INFO][5214] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" HandleID="k8s-pod-network.60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.670 [INFO][5214] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:45.671829 containerd[1835]: 2025-11-01 00:31:45.671 [INFO][5169] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:31:45.672072 containerd[1835]: time="2025-11-01T00:31:45.671885164Z" level=info msg="TearDown network for sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\" successfully" Nov 1 00:31:45.672072 containerd[1835]: time="2025-11-01T00:31:45.671898202Z" level=info msg="StopPodSandbox for \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\" returns successfully" Nov 1 00:31:45.672772 containerd[1835]: time="2025-11-01T00:31:45.672759117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t7ml5,Uid:0e047d2f-1491-42f0-a675-eff64087e5dd,Namespace:calico-system,Attempt:1,}" Nov 1 00:31:45.675145 systemd[1]: run-netns-cni\x2d5a1eabc9\x2d98de\x2dbd49\x2deb07\x2dfcb6cb93bbc7.mount: Deactivated successfully. Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5167] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" iface="eth0" netns="/var/run/netns/cni-3d479c8e-3e92-ea76-8796-75fcfabb757f" Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5167] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" iface="eth0" netns="/var/run/netns/cni-3d479c8e-3e92-ea76-8796-75fcfabb757f" Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5167] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" iface="eth0" netns="/var/run/netns/cni-3d479c8e-3e92-ea76-8796-75fcfabb757f" Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.651 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.661 [INFO][5217] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" HandleID="k8s-pod-network.0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.661 [INFO][5217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.670 [INFO][5217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.674 [WARNING][5217] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" HandleID="k8s-pod-network.0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.674 [INFO][5217] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" HandleID="k8s-pod-network.0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.675 [INFO][5217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:45.677650 containerd[1835]: 2025-11-01 00:31:45.675 [INFO][5167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:31:45.678172 containerd[1835]: time="2025-11-01T00:31:45.677878103Z" level=info msg="TearDown network for sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\" successfully" Nov 1 00:31:45.678172 containerd[1835]: time="2025-11-01T00:31:45.677896683Z" level=info msg="StopPodSandbox for \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\" returns successfully" Nov 1 00:31:45.678980 containerd[1835]: time="2025-11-01T00:31:45.678960447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57458876-7nj5x,Uid:3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:31:45.680649 systemd[1]: run-netns-cni\x2d3d479c8e\x2d3e92\x2dea76\x2d8796\x2d75fcfabb757f.mount: Deactivated successfully. 
Nov 1 00:31:45.725777 systemd-networkd[1521]: cali5987d9d7177: Link UP Nov 1 00:31:45.725915 systemd-networkd[1521]: cali5987d9d7177: Gained carrier Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.691 [INFO][5259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0 coredns-66bc5c9577- kube-system bd354653-92ce-413f-9189-183709f503cd 906 0 2025-11-01 00:31:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-d37906c143 coredns-66bc5c9577-6d7s5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5987d9d7177 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Namespace="kube-system" Pod="coredns-66bc5c9577-6d7s5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.691 [INFO][5259] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Namespace="kube-system" Pod="coredns-66bc5c9577-6d7s5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.703 [INFO][5327] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" HandleID="k8s-pod-network.f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.703 [INFO][5327] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" HandleID="k8s-pod-network.f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043e040), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-d37906c143", "pod":"coredns-66bc5c9577-6d7s5", "timestamp":"2025-11-01 00:31:45.7037494 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d37906c143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.703 [INFO][5327] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.703 [INFO][5327] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.703 [INFO][5327] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d37906c143' Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.708 [INFO][5327] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.711 [INFO][5327] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.713 [INFO][5327] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.713 [INFO][5327] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.714 [INFO][5327] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.714 [INFO][5327] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.715 [INFO][5327] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746 Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.721 [INFO][5327] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.724 [INFO][5327] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.66.130/26] block=192.168.66.128/26 handle="k8s-pod-network.f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.724 [INFO][5327] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.130/26] handle="k8s-pod-network.f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.724 [INFO][5327] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:45.732391 containerd[1835]: 2025-11-01 00:31:45.724 [INFO][5327] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.130/26] IPv6=[] ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" HandleID="k8s-pod-network.f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:31:45.732782 containerd[1835]: 2025-11-01 00:31:45.725 [INFO][5259] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Namespace="kube-system" Pod="coredns-66bc5c9577-6d7s5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bd354653-92ce-413f-9189-183709f503cd", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"", Pod:"coredns-66bc5c9577-6d7s5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5987d9d7177", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:45.732782 containerd[1835]: 2025-11-01 00:31:45.725 [INFO][5259] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.130/32] ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Namespace="kube-system" Pod="coredns-66bc5c9577-6d7s5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:31:45.732782 containerd[1835]: 2025-11-01 00:31:45.725 [INFO][5259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5987d9d7177 
ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Namespace="kube-system" Pod="coredns-66bc5c9577-6d7s5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:31:45.732782 containerd[1835]: 2025-11-01 00:31:45.725 [INFO][5259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Namespace="kube-system" Pod="coredns-66bc5c9577-6d7s5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:31:45.732782 containerd[1835]: 2025-11-01 00:31:45.726 [INFO][5259] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Namespace="kube-system" Pod="coredns-66bc5c9577-6d7s5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bd354653-92ce-413f-9189-183709f503cd", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746", 
Pod:"coredns-66bc5c9577-6d7s5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5987d9d7177", MAC:"2e:d8:a4:ab:71:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:45.732927 containerd[1835]: 2025-11-01 00:31:45.731 [INFO][5259] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746" Namespace="kube-system" Pod="coredns-66bc5c9577-6d7s5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:31:45.740570 containerd[1835]: time="2025-11-01T00:31:45.740257545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:45.740570 containerd[1835]: time="2025-11-01T00:31:45.740503026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:45.740570 containerd[1835]: time="2025-11-01T00:31:45.740512191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:45.740677 containerd[1835]: time="2025-11-01T00:31:45.740559012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:45.760440 systemd[1]: Started cri-containerd-f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746.scope - libcontainer container f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746. Nov 1 00:31:45.782959 containerd[1835]: time="2025-11-01T00:31:45.782936306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6d7s5,Uid:bd354653-92ce-413f-9189-183709f503cd,Namespace:kube-system,Attempt:1,} returns sandbox id \"f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746\"" Nov 1 00:31:45.784909 containerd[1835]: time="2025-11-01T00:31:45.784894535Z" level=info msg="CreateContainer within sandbox \"f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:31:45.789114 containerd[1835]: time="2025-11-01T00:31:45.789100339Z" level=info msg="CreateContainer within sandbox \"f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24c713d55b8769560029e0c1771d038a81589abe1138a64d1a28cfb90ff193aa\"" Nov 1 00:31:45.789317 containerd[1835]: time="2025-11-01T00:31:45.789303880Z" level=info msg="StartContainer for \"24c713d55b8769560029e0c1771d038a81589abe1138a64d1a28cfb90ff193aa\"" Nov 1 00:31:45.808246 systemd[1]: Started cri-containerd-24c713d55b8769560029e0c1771d038a81589abe1138a64d1a28cfb90ff193aa.scope - libcontainer container 
24c713d55b8769560029e0c1771d038a81589abe1138a64d1a28cfb90ff193aa. Nov 1 00:31:45.823880 containerd[1835]: time="2025-11-01T00:31:45.823850058Z" level=info msg="StartContainer for \"24c713d55b8769560029e0c1771d038a81589abe1138a64d1a28cfb90ff193aa\" returns successfully" Nov 1 00:31:45.826713 systemd-networkd[1521]: calif93c6fc9a63: Link UP Nov 1 00:31:45.826957 systemd-networkd[1521]: calif93c6fc9a63: Gained carrier Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.693 [INFO][5270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0 goldmane-7c778bb748- calico-system 0e047d2f-1491-42f0-a675-eff64087e5dd 905 0 2025-11-01 00:31:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-d37906c143 goldmane-7c778bb748-t7ml5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif93c6fc9a63 [] [] }} ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Namespace="calico-system" Pod="goldmane-7c778bb748-t7ml5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.693 [INFO][5270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Namespace="calico-system" Pod="goldmane-7c778bb748-t7ml5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.706 [INFO][5334] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" 
HandleID="k8s-pod-network.6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.706 [INFO][5334] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" HandleID="k8s-pod-network.6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139690), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d37906c143", "pod":"goldmane-7c778bb748-t7ml5", "timestamp":"2025-11-01 00:31:45.706587886 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d37906c143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.706 [INFO][5334] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.724 [INFO][5334] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.724 [INFO][5334] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d37906c143' Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.808 [INFO][5334] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.811 [INFO][5334] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.814 [INFO][5334] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.815 [INFO][5334] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.816 [INFO][5334] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.816 [INFO][5334] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.817 [INFO][5334] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43 Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.820 [INFO][5334] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.823 [INFO][5334] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.66.131/26] block=192.168.66.128/26 handle="k8s-pod-network.6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.824 [INFO][5334] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.131/26] handle="k8s-pod-network.6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.824 [INFO][5334] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:45.834173 containerd[1835]: 2025-11-01 00:31:45.824 [INFO][5334] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.131/26] IPv6=[] ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" HandleID="k8s-pod-network.6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:31:45.835053 containerd[1835]: 2025-11-01 00:31:45.825 [INFO][5270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Namespace="calico-system" Pod="goldmane-7c778bb748-t7ml5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0e047d2f-1491-42f0-a675-eff64087e5dd", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"", Pod:"goldmane-7c778bb748-t7ml5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif93c6fc9a63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:45.835053 containerd[1835]: 2025-11-01 00:31:45.825 [INFO][5270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.131/32] ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Namespace="calico-system" Pod="goldmane-7c778bb748-t7ml5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:31:45.835053 containerd[1835]: 2025-11-01 00:31:45.825 [INFO][5270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif93c6fc9a63 ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Namespace="calico-system" Pod="goldmane-7c778bb748-t7ml5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:31:45.835053 containerd[1835]: 2025-11-01 00:31:45.827 [INFO][5270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Namespace="calico-system" Pod="goldmane-7c778bb748-t7ml5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:31:45.835053 containerd[1835]: 2025-11-01 00:31:45.827 [INFO][5270] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Namespace="calico-system" Pod="goldmane-7c778bb748-t7ml5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0e047d2f-1491-42f0-a675-eff64087e5dd", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43", Pod:"goldmane-7c778bb748-t7ml5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif93c6fc9a63", MAC:"02:9c:d7:9f:3b:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:45.835053 containerd[1835]: 2025-11-01 00:31:45.833 [INFO][5270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43" Namespace="calico-system" Pod="goldmane-7c778bb748-t7ml5" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:31:45.843979 containerd[1835]: time="2025-11-01T00:31:45.843737245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:45.843979 containerd[1835]: time="2025-11-01T00:31:45.843962302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:45.843979 containerd[1835]: time="2025-11-01T00:31:45.843970830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:45.844123 containerd[1835]: time="2025-11-01T00:31:45.844023072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:45.861352 systemd[1]: Started cri-containerd-6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43.scope - libcontainer container 6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43. 
Nov 1 00:31:45.884466 containerd[1835]: time="2025-11-01T00:31:45.884370698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t7ml5,Uid:0e047d2f-1491-42f0-a675-eff64087e5dd,Namespace:calico-system,Attempt:1,} returns sandbox id \"6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43\"" Nov 1 00:31:45.885460 containerd[1835]: time="2025-11-01T00:31:45.885436058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:31:45.924931 systemd-networkd[1521]: cali14253e12abb: Link UP Nov 1 00:31:45.925126 systemd-networkd[1521]: cali14253e12abb: Gained carrier Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.700 [INFO][5298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0 calico-apiserver-57458876- calico-apiserver 3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5 907 0 2025-11-01 00:31:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57458876 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-d37906c143 calico-apiserver-57458876-7nj5x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali14253e12abb [] [] }} ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-7nj5x" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.700 [INFO][5298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-7nj5x" 
WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.714 [INFO][5352] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" HandleID="k8s-pod-network.f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.714 [INFO][5352] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" HandleID="k8s-pod-network.f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000694610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-d37906c143", "pod":"calico-apiserver-57458876-7nj5x", "timestamp":"2025-11-01 00:31:45.714071613 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d37906c143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.714 [INFO][5352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.824 [INFO][5352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.824 [INFO][5352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d37906c143' Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.909 [INFO][5352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.911 [INFO][5352] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.914 [INFO][5352] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.915 [INFO][5352] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.916 [INFO][5352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.916 [INFO][5352] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.917 [INFO][5352] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.919 [INFO][5352] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.922 [INFO][5352] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.66.132/26] block=192.168.66.128/26 handle="k8s-pod-network.f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.922 [INFO][5352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.132/26] handle="k8s-pod-network.f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.922 [INFO][5352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:45.932892 containerd[1835]: 2025-11-01 00:31:45.922 [INFO][5352] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.132/26] IPv6=[] ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" HandleID="k8s-pod-network.f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:31:45.934024 containerd[1835]: 2025-11-01 00:31:45.923 [INFO][5298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-7nj5x" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0", GenerateName:"calico-apiserver-57458876-", Namespace:"calico-apiserver", SelfLink:"", UID:"3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"57458876", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"", Pod:"calico-apiserver-57458876-7nj5x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14253e12abb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:45.934024 containerd[1835]: 2025-11-01 00:31:45.923 [INFO][5298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.132/32] ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-7nj5x" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:31:45.934024 containerd[1835]: 2025-11-01 00:31:45.923 [INFO][5298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14253e12abb ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-7nj5x" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:31:45.934024 containerd[1835]: 2025-11-01 00:31:45.925 [INFO][5298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-7nj5x" 
WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:31:45.934024 containerd[1835]: 2025-11-01 00:31:45.925 [INFO][5298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-7nj5x" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0", GenerateName:"calico-apiserver-57458876-", Namespace:"calico-apiserver", SelfLink:"", UID:"3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57458876", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c", Pod:"calico-apiserver-57458876-7nj5x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14253e12abb", MAC:"b2:13:10:d5:c5:12", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:45.934024 containerd[1835]: 2025-11-01 00:31:45.931 [INFO][5298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-7nj5x" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:31:45.942309 containerd[1835]: time="2025-11-01T00:31:45.942256542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:45.942414 containerd[1835]: time="2025-11-01T00:31:45.942398717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:45.942447 containerd[1835]: time="2025-11-01T00:31:45.942411697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:45.942473 containerd[1835]: time="2025-11-01T00:31:45.942462674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:45.963782 systemd[1]: Started cri-containerd-f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c.scope - libcontainer container f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c. 
Nov 1 00:31:46.038868 containerd[1835]: time="2025-11-01T00:31:46.038820062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57458876-7nj5x,Uid:3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c\"" Nov 1 00:31:46.249435 containerd[1835]: time="2025-11-01T00:31:46.249191365Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:46.249976 containerd[1835]: time="2025-11-01T00:31:46.249953228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:31:46.250040 containerd[1835]: time="2025-11-01T00:31:46.250019419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:31:46.250107 kubelet[3123]: E1101 00:31:46.250079 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:31:46.250400 kubelet[3123]: E1101 00:31:46.250112 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:31:46.250400 kubelet[3123]: E1101 00:31:46.250232 3123 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container goldmane start failed in pod goldmane-7c778bb748-t7ml5_calico-system(0e047d2f-1491-42f0-a675-eff64087e5dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:46.250400 kubelet[3123]: E1101 00:31:46.250261 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:31:46.250498 containerd[1835]: time="2025-11-01T00:31:46.250314354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:31:46.616201 containerd[1835]: time="2025-11-01T00:31:46.616074425Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:46.617261 containerd[1835]: time="2025-11-01T00:31:46.617125725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:31:46.617261 containerd[1835]: time="2025-11-01T00:31:46.617153795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:31:46.617442 kubelet[3123]: E1101 00:31:46.617378 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:31:46.617442 kubelet[3123]: E1101 00:31:46.617419 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:31:46.617583 kubelet[3123]: E1101 00:31:46.617488 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-7nj5x_calico-apiserver(3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:46.617583 kubelet[3123]: E1101 00:31:46.617509 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:31:46.621038 containerd[1835]: time="2025-11-01T00:31:46.621024061Z" level=info msg="StopPodSandbox for \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\"" Nov 1 00:31:46.621103 containerd[1835]: time="2025-11-01T00:31:46.621087175Z" level=info 
msg="StopPodSandbox for \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\"" Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5604] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" iface="eth0" netns="/var/run/netns/cni-38a62f0d-7edd-dce8-ab6e-fd3441093849" Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" iface="eth0" netns="/var/run/netns/cni-38a62f0d-7edd-dce8-ab6e-fd3441093849" Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" iface="eth0" netns="/var/run/netns/cni-38a62f0d-7edd-dce8-ab6e-fd3441093849" Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5604] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5604] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.655 [INFO][5638] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" HandleID="k8s-pod-network.7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.655 [INFO][5638] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.655 [INFO][5638] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.658 [WARNING][5638] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" HandleID="k8s-pod-network.7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.658 [INFO][5638] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" HandleID="k8s-pod-network.7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.659 [INFO][5638] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:46.660867 containerd[1835]: 2025-11-01 00:31:46.660 [INFO][5604] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:31:46.661274 containerd[1835]: time="2025-11-01T00:31:46.660935422Z" level=info msg="TearDown network for sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\" successfully" Nov 1 00:31:46.661274 containerd[1835]: time="2025-11-01T00:31:46.660952112Z" level=info msg="StopPodSandbox for \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\" returns successfully" Nov 1 00:31:46.662063 containerd[1835]: time="2025-11-01T00:31:46.662048391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57458876-8h7pk,Uid:68ef77d9-c28e-4552-8ad9-f26358f8691b,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.641 [INFO][5603] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5603] cni-plugin/dataplane_linux.go 559: Deleting workload's device in 
netns. ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" iface="eth0" netns="/var/run/netns/cni-408a7cd9-786c-3d0c-e656-a0ad3bc3648b" Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5603] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" iface="eth0" netns="/var/run/netns/cni-408a7cd9-786c-3d0c-e656-a0ad3bc3648b" Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5603] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" iface="eth0" netns="/var/run/netns/cni-408a7cd9-786c-3d0c-e656-a0ad3bc3648b" Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5603] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.642 [INFO][5603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.655 [INFO][5635] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" HandleID="k8s-pod-network.f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.655 [INFO][5635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.659 [INFO][5635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.662 [WARNING][5635] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" HandleID="k8s-pod-network.f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.662 [INFO][5635] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" HandleID="k8s-pod-network.f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.664 [INFO][5635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:46.665628 containerd[1835]: 2025-11-01 00:31:46.664 [INFO][5603] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:31:46.665936 containerd[1835]: time="2025-11-01T00:31:46.665691354Z" level=info msg="TearDown network for sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\" successfully" Nov 1 00:31:46.665936 containerd[1835]: time="2025-11-01T00:31:46.665715458Z" level=info msg="StopPodSandbox for \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\" returns successfully" Nov 1 00:31:46.666608 containerd[1835]: time="2025-11-01T00:31:46.666576216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vvbjm,Uid:0245246d-bdc5-450d-b21c-5eff759295d4,Namespace:calico-system,Attempt:1,}" Nov 1 00:31:46.671955 systemd[1]: run-netns-cni\x2d408a7cd9\x2d786c\x2d3d0c\x2de656\x2da0ad3bc3648b.mount: Deactivated successfully. Nov 1 00:31:46.672014 systemd[1]: run-netns-cni\x2d38a62f0d\x2d7edd\x2ddce8\x2dab6e\x2dfd3441093849.mount: Deactivated successfully. 
Nov 1 00:31:46.720760 systemd-networkd[1521]: cali3d89e2268e1: Link UP Nov 1 00:31:46.721034 systemd-networkd[1521]: cali3d89e2268e1: Gained carrier Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.683 [INFO][5669] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0 calico-apiserver-57458876- calico-apiserver 68ef77d9-c28e-4552-8ad9-f26358f8691b 932 0 2025-11-01 00:31:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57458876 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-d37906c143 calico-apiserver-57458876-8h7pk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3d89e2268e1 [] [] }} ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-8h7pk" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.683 [INFO][5669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-8h7pk" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.697 [INFO][5713] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" HandleID="k8s-pod-network.326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:31:46.726612 containerd[1835]: 
2025-11-01 00:31:46.697 [INFO][5713] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" HandleID="k8s-pod-network.326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e8c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-d37906c143", "pod":"calico-apiserver-57458876-8h7pk", "timestamp":"2025-11-01 00:31:46.697813772 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d37906c143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.697 [INFO][5713] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.697 [INFO][5713] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.698 [INFO][5713] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d37906c143' Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.702 [INFO][5713] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.706 [INFO][5713] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.709 [INFO][5713] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.710 [INFO][5713] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.712 [INFO][5713] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.712 [INFO][5713] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.713 [INFO][5713] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01 Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.715 [INFO][5713] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.718 [INFO][5713] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.66.133/26] block=192.168.66.128/26 handle="k8s-pod-network.326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.718 [INFO][5713] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.133/26] handle="k8s-pod-network.326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.718 [INFO][5713] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:46.726612 containerd[1835]: 2025-11-01 00:31:46.718 [INFO][5713] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.133/26] IPv6=[] ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" HandleID="k8s-pod-network.326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:31:46.727021 containerd[1835]: 2025-11-01 00:31:46.719 [INFO][5669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-8h7pk" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0", GenerateName:"calico-apiserver-57458876-", Namespace:"calico-apiserver", SelfLink:"", UID:"68ef77d9-c28e-4552-8ad9-f26358f8691b", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"57458876", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"", Pod:"calico-apiserver-57458876-8h7pk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d89e2268e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:46.727021 containerd[1835]: 2025-11-01 00:31:46.719 [INFO][5669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.133/32] ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-8h7pk" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:31:46.727021 containerd[1835]: 2025-11-01 00:31:46.719 [INFO][5669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d89e2268e1 ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-8h7pk" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:31:46.727021 containerd[1835]: 2025-11-01 00:31:46.721 [INFO][5669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-8h7pk" 
WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:31:46.727021 containerd[1835]: 2025-11-01 00:31:46.721 [INFO][5669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-8h7pk" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0", GenerateName:"calico-apiserver-57458876-", Namespace:"calico-apiserver", SelfLink:"", UID:"68ef77d9-c28e-4552-8ad9-f26358f8691b", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57458876", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01", Pod:"calico-apiserver-57458876-8h7pk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d89e2268e1", MAC:"ee:8e:39:4f:b3:2c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:46.727021 containerd[1835]: 2025-11-01 00:31:46.725 [INFO][5669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01" Namespace="calico-apiserver" Pod="calico-apiserver-57458876-8h7pk" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:31:46.734508 containerd[1835]: time="2025-11-01T00:31:46.734433157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:46.734701 containerd[1835]: time="2025-11-01T00:31:46.734651573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:46.734701 containerd[1835]: time="2025-11-01T00:31:46.734671113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:46.734758 containerd[1835]: time="2025-11-01T00:31:46.734722848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:46.750304 systemd[1]: Started cri-containerd-326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01.scope - libcontainer container 326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01. 
Nov 1 00:31:46.766754 kubelet[3123]: E1101 00:31:46.766731 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:31:46.767242 kubelet[3123]: E1101 00:31:46.767223 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:31:46.776521 containerd[1835]: time="2025-11-01T00:31:46.776477032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57458876-8h7pk,Uid:68ef77d9-c28e-4552-8ad9-f26358f8691b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01\"" Nov 1 00:31:46.777426 containerd[1835]: time="2025-11-01T00:31:46.777407689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:31:46.785958 kubelet[3123]: I1101 00:31:46.785927 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6d7s5" podStartSLOduration=32.785917124 podStartE2EDuration="32.785917124s" 
podCreationTimestamp="2025-11-01 00:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:31:46.785824749 +0000 UTC m=+38.208057238" watchObservedRunningTime="2025-11-01 00:31:46.785917124 +0000 UTC m=+38.208149603" Nov 1 00:31:46.861693 systemd-networkd[1521]: cali9c4dba92441: Link UP Nov 1 00:31:46.862468 systemd-networkd[1521]: cali9c4dba92441: Gained carrier Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.688 [INFO][5681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0 csi-node-driver- calico-system 0245246d-bdc5-450d-b21c-5eff759295d4 931 0 2025-11-01 00:31:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-d37906c143 csi-node-driver-vvbjm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9c4dba92441 [] [] }} ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Namespace="calico-system" Pod="csi-node-driver-vvbjm" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.688 [INFO][5681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Namespace="calico-system" Pod="csi-node-driver-vvbjm" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.702 [INFO][5724] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" HandleID="k8s-pod-network.2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.702 [INFO][5724] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" HandleID="k8s-pod-network.2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d37906c143", "pod":"csi-node-driver-vvbjm", "timestamp":"2025-11-01 00:31:46.702662899 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d37906c143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.702 [INFO][5724] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.718 [INFO][5724] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.718 [INFO][5724] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d37906c143' Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.804 [INFO][5724] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.813 [INFO][5724] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.822 [INFO][5724] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.826 [INFO][5724] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.831 [INFO][5724] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.831 [INFO][5724] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.834 [INFO][5724] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.841 [INFO][5724] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.852 [INFO][5724] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.66.134/26] block=192.168.66.128/26 handle="k8s-pod-network.2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.853 [INFO][5724] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.134/26] handle="k8s-pod-network.2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.853 [INFO][5724] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:46.886138 containerd[1835]: 2025-11-01 00:31:46.853 [INFO][5724] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.134/26] IPv6=[] ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" HandleID="k8s-pod-network.2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:31:46.886637 containerd[1835]: 2025-11-01 00:31:46.857 [INFO][5681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Namespace="calico-system" Pod="csi-node-driver-vvbjm" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0245246d-bdc5-450d-b21c-5eff759295d4", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"", Pod:"csi-node-driver-vvbjm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c4dba92441", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:46.886637 containerd[1835]: 2025-11-01 00:31:46.857 [INFO][5681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.134/32] ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Namespace="calico-system" Pod="csi-node-driver-vvbjm" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:31:46.886637 containerd[1835]: 2025-11-01 00:31:46.858 [INFO][5681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c4dba92441 ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Namespace="calico-system" Pod="csi-node-driver-vvbjm" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:31:46.886637 containerd[1835]: 2025-11-01 00:31:46.862 [INFO][5681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Namespace="calico-system" Pod="csi-node-driver-vvbjm" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:31:46.886637 containerd[1835]: 2025-11-01 00:31:46.864 
[INFO][5681] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Namespace="calico-system" Pod="csi-node-driver-vvbjm" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0245246d-bdc5-450d-b21c-5eff759295d4", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e", Pod:"csi-node-driver-vvbjm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c4dba92441", MAC:"be:c8:d9:e9:53:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:46.886637 containerd[1835]: 2025-11-01 00:31:46.883 [INFO][5681] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e" Namespace="calico-system" Pod="csi-node-driver-vvbjm" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:31:46.894960 containerd[1835]: time="2025-11-01T00:31:46.894744357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:46.894960 containerd[1835]: time="2025-11-01T00:31:46.894951530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:46.894960 containerd[1835]: time="2025-11-01T00:31:46.894958926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:46.895107 containerd[1835]: time="2025-11-01T00:31:46.894998595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:46.918432 systemd[1]: Started cri-containerd-2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e.scope - libcontainer container 2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e. 
Nov 1 00:31:46.928707 containerd[1835]: time="2025-11-01T00:31:46.928656877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vvbjm,Uid:0245246d-bdc5-450d-b21c-5eff759295d4,Namespace:calico-system,Attempt:1,} returns sandbox id \"2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e\"" Nov 1 00:31:47.112386 containerd[1835]: time="2025-11-01T00:31:47.112293388Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:47.113279 containerd[1835]: time="2025-11-01T00:31:47.113256385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:31:47.113341 containerd[1835]: time="2025-11-01T00:31:47.113321662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:31:47.113515 kubelet[3123]: E1101 00:31:47.113494 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:31:47.113563 kubelet[3123]: E1101 00:31:47.113523 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:31:47.113733 kubelet[3123]: E1101 00:31:47.113719 3123 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-57458876-8h7pk_calico-apiserver(68ef77d9-c28e-4552-8ad9-f26358f8691b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:47.113765 kubelet[3123]: E1101 00:31:47.113745 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:31:47.113808 containerd[1835]: time="2025-11-01T00:31:47.113770337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:31:47.441443 systemd-networkd[1521]: cali5987d9d7177: Gained IPv6LL Nov 1 00:31:47.447654 containerd[1835]: time="2025-11-01T00:31:47.447545399Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:47.448542 containerd[1835]: time="2025-11-01T00:31:47.448447281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:31:47.448542 containerd[1835]: time="2025-11-01T00:31:47.448506463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:31:47.448739 kubelet[3123]: E1101 00:31:47.448719 3123 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:31:47.448939 kubelet[3123]: E1101 00:31:47.448746 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:31:47.448939 kubelet[3123]: E1101 00:31:47.448788 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:47.449298 containerd[1835]: time="2025-11-01T00:31:47.449287498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:31:47.621770 containerd[1835]: time="2025-11-01T00:31:47.621658445Z" level=info msg="StopPodSandbox for \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\"" Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.688 [INFO][5864] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.688 [INFO][5864] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" iface="eth0" netns="/var/run/netns/cni-78802aac-92fc-1bdc-4f08-a7512a1cadea" Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.688 [INFO][5864] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" iface="eth0" netns="/var/run/netns/cni-78802aac-92fc-1bdc-4f08-a7512a1cadea" Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.688 [INFO][5864] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" iface="eth0" netns="/var/run/netns/cni-78802aac-92fc-1bdc-4f08-a7512a1cadea" Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.688 [INFO][5864] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.688 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.704 [INFO][5881] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" HandleID="k8s-pod-network.666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.704 [INFO][5881] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.704 [INFO][5881] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.709 [WARNING][5881] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" HandleID="k8s-pod-network.666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.709 [INFO][5881] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" HandleID="k8s-pod-network.666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.710 [INFO][5881] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:47.712493 containerd[1835]: 2025-11-01 00:31:47.711 [INFO][5864] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:31:47.713121 containerd[1835]: time="2025-11-01T00:31:47.712565570Z" level=info msg="TearDown network for sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\" successfully" Nov 1 00:31:47.713121 containerd[1835]: time="2025-11-01T00:31:47.712598960Z" level=info msg="StopPodSandbox for \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\" returns successfully" Nov 1 00:31:47.714326 containerd[1835]: time="2025-11-01T00:31:47.714304655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vcqfb,Uid:9a95ee66-2c19-4f09-bcfd-9c4e55da76e6,Namespace:kube-system,Attempt:1,}" Nov 1 00:31:47.714953 systemd[1]: run-netns-cni\x2d78802aac\x2d92fc\x2d1bdc\x2d4f08\x2da7512a1cadea.mount: Deactivated successfully. 
Nov 1 00:31:47.761232 systemd-networkd[1521]: cali14253e12abb: Gained IPv6LL Nov 1 00:31:47.763813 systemd-networkd[1521]: cali38e475a6ce0: Link UP Nov 1 00:31:47.763941 systemd-networkd[1521]: cali38e475a6ce0: Gained carrier Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.734 [INFO][5896] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0 coredns-66bc5c9577- kube-system 9a95ee66-2c19-4f09-bcfd-9c4e55da76e6 964 0 2025-11-01 00:31:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-d37906c143 coredns-66bc5c9577-vcqfb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali38e475a6ce0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Namespace="kube-system" Pod="coredns-66bc5c9577-vcqfb" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.734 [INFO][5896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Namespace="kube-system" Pod="coredns-66bc5c9577-vcqfb" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.745 [INFO][5918] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" HandleID="k8s-pod-network.b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:31:47.769275 
containerd[1835]: 2025-11-01 00:31:47.745 [INFO][5918] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" HandleID="k8s-pod-network.b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000604890), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-d37906c143", "pod":"coredns-66bc5c9577-vcqfb", "timestamp":"2025-11-01 00:31:47.745244598 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d37906c143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.745 [INFO][5918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.745 [INFO][5918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.745 [INFO][5918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d37906c143' Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.749 [INFO][5918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.751 [INFO][5918] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.753 [INFO][5918] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.754 [INFO][5918] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.756 [INFO][5918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.756 [INFO][5918] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.756 [INFO][5918] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6 Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.759 [INFO][5918] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.762 [INFO][5918] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.66.135/26] block=192.168.66.128/26 handle="k8s-pod-network.b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.762 [INFO][5918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.135/26] handle="k8s-pod-network.b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.762 [INFO][5918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:47.769275 containerd[1835]: 2025-11-01 00:31:47.762 [INFO][5918] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.135/26] IPv6=[] ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" HandleID="k8s-pod-network.b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:31:47.769720 containerd[1835]: 2025-11-01 00:31:47.763 [INFO][5896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Namespace="kube-system" Pod="coredns-66bc5c9577-vcqfb" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9a95ee66-2c19-4f09-bcfd-9c4e55da76e6", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"", Pod:"coredns-66bc5c9577-vcqfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38e475a6ce0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:47.769720 containerd[1835]: 2025-11-01 00:31:47.763 [INFO][5896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.135/32] ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Namespace="kube-system" Pod="coredns-66bc5c9577-vcqfb" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:31:47.769720 containerd[1835]: 2025-11-01 00:31:47.763 [INFO][5896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38e475a6ce0 
ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Namespace="kube-system" Pod="coredns-66bc5c9577-vcqfb" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:31:47.769720 containerd[1835]: 2025-11-01 00:31:47.764 [INFO][5896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Namespace="kube-system" Pod="coredns-66bc5c9577-vcqfb" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:31:47.769720 containerd[1835]: 2025-11-01 00:31:47.764 [INFO][5896] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Namespace="kube-system" Pod="coredns-66bc5c9577-vcqfb" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9a95ee66-2c19-4f09-bcfd-9c4e55da76e6", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6", 
Pod:"coredns-66bc5c9577-vcqfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38e475a6ce0", MAC:"fe:54:8a:97:7b:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:47.769895 containerd[1835]: 2025-11-01 00:31:47.768 [INFO][5896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6" Namespace="kube-system" Pod="coredns-66bc5c9577-vcqfb" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:31:47.770638 kubelet[3123]: E1101 00:31:47.770620 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:31:47.770638 kubelet[3123]: E1101 00:31:47.770621 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:31:47.770733 kubelet[3123]: E1101 00:31:47.770623 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:31:47.778595 containerd[1835]: time="2025-11-01T00:31:47.778540842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:47.778595 containerd[1835]: time="2025-11-01T00:31:47.778577589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:47.778595 containerd[1835]: time="2025-11-01T00:31:47.778585188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:47.778734 containerd[1835]: time="2025-11-01T00:31:47.778644757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:47.807388 systemd[1]: Started cri-containerd-b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6.scope - libcontainer container b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6. Nov 1 00:31:47.812003 containerd[1835]: time="2025-11-01T00:31:47.811977230Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:47.812495 containerd[1835]: time="2025-11-01T00:31:47.812453308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:31:47.812541 containerd[1835]: time="2025-11-01T00:31:47.812464811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:31:47.812602 kubelet[3123]: E1101 00:31:47.812579 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:31:47.812647 kubelet[3123]: E1101 00:31:47.812609 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:31:47.812690 kubelet[3123]: E1101 00:31:47.812662 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:47.812742 kubelet[3123]: E1101 00:31:47.812686 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:31:47.830443 containerd[1835]: time="2025-11-01T00:31:47.830420453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vcqfb,Uid:9a95ee66-2c19-4f09-bcfd-9c4e55da76e6,Namespace:kube-system,Attempt:1,} returns sandbox id \"b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6\"" Nov 1 00:31:47.832313 
containerd[1835]: time="2025-11-01T00:31:47.832300418Z" level=info msg="CreateContainer within sandbox \"b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:31:47.836764 containerd[1835]: time="2025-11-01T00:31:47.836719309Z" level=info msg="CreateContainer within sandbox \"b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"050aeaddfc7a4124dc4d77b0f14c95ba95e41aff18b035767c217df72573ed5b\"" Nov 1 00:31:47.836981 containerd[1835]: time="2025-11-01T00:31:47.836969588Z" level=info msg="StartContainer for \"050aeaddfc7a4124dc4d77b0f14c95ba95e41aff18b035767c217df72573ed5b\"" Nov 1 00:31:47.857433 systemd[1]: Started cri-containerd-050aeaddfc7a4124dc4d77b0f14c95ba95e41aff18b035767c217df72573ed5b.scope - libcontainer container 050aeaddfc7a4124dc4d77b0f14c95ba95e41aff18b035767c217df72573ed5b. Nov 1 00:31:47.869581 containerd[1835]: time="2025-11-01T00:31:47.869561115Z" level=info msg="StartContainer for \"050aeaddfc7a4124dc4d77b0f14c95ba95e41aff18b035767c217df72573ed5b\" returns successfully" Nov 1 00:31:47.889414 systemd-networkd[1521]: calif93c6fc9a63: Gained IPv6LL Nov 1 00:31:47.889614 systemd-networkd[1521]: cali3d89e2268e1: Gained IPv6LL Nov 1 00:31:48.622381 containerd[1835]: time="2025-11-01T00:31:48.622272366Z" level=info msg="StopPodSandbox for \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\"" Nov 1 00:31:48.656209 systemd-networkd[1521]: cali9c4dba92441: Gained IPv6LL Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.643 [INFO][6046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.643 [INFO][6046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" iface="eth0" netns="/var/run/netns/cni-96e4c2cd-91cf-bd32-5984-e27840fe6a09" Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.644 [INFO][6046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" iface="eth0" netns="/var/run/netns/cni-96e4c2cd-91cf-bd32-5984-e27840fe6a09" Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.644 [INFO][6046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" iface="eth0" netns="/var/run/netns/cni-96e4c2cd-91cf-bd32-5984-e27840fe6a09" Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.644 [INFO][6046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.644 [INFO][6046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.653 [INFO][6069] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" HandleID="k8s-pod-network.664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.653 [INFO][6069] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.653 [INFO][6069] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.656 [WARNING][6069] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" HandleID="k8s-pod-network.664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.656 [INFO][6069] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" HandleID="k8s-pod-network.664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.657 [INFO][6069] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:48.659290 containerd[1835]: 2025-11-01 00:31:48.658 [INFO][6046] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:31:48.659548 containerd[1835]: time="2025-11-01T00:31:48.659321974Z" level=info msg="TearDown network for sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\" successfully" Nov 1 00:31:48.659548 containerd[1835]: time="2025-11-01T00:31:48.659338316Z" level=info msg="StopPodSandbox for \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\" returns successfully" Nov 1 00:31:48.660400 containerd[1835]: time="2025-11-01T00:31:48.660385745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c65644df-6srvh,Uid:055e53bc-992b-4781-aa59-63b9452c2f8e,Namespace:calico-system,Attempt:1,}" Nov 1 00:31:48.670689 systemd[1]: run-netns-cni\x2d96e4c2cd\x2d91cf\x2dbd32\x2d5984\x2de27840fe6a09.mount: Deactivated successfully. 
Nov 1 00:31:48.713658 systemd-networkd[1521]: calic368a9c2d4a: Link UP Nov 1 00:31:48.713834 systemd-networkd[1521]: calic368a9c2d4a: Gained carrier Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.681 [INFO][6081] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0 calico-kube-controllers-75c65644df- calico-system 055e53bc-992b-4781-aa59-63b9452c2f8e 988 0 2025-11-01 00:31:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75c65644df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-d37906c143 calico-kube-controllers-75c65644df-6srvh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic368a9c2d4a [] [] }} ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" Namespace="calico-system" Pod="calico-kube-controllers-75c65644df-6srvh" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.681 [INFO][6081] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" Namespace="calico-system" Pod="calico-kube-controllers-75c65644df-6srvh" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.694 [INFO][6104] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" HandleID="k8s-pod-network.0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" 
Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.694 [INFO][6104] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" HandleID="k8s-pod-network.0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f670), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d37906c143", "pod":"calico-kube-controllers-75c65644df-6srvh", "timestamp":"2025-11-01 00:31:48.694723036 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d37906c143", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.695 [INFO][6104] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.695 [INFO][6104] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.695 [INFO][6104] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d37906c143' Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.699 [INFO][6104] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.701 [INFO][6104] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.703 [INFO][6104] ipam/ipam.go 511: Trying affinity for 192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.704 [INFO][6104] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.705 [INFO][6104] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.128/26 host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.705 [INFO][6104] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.66.128/26 handle="k8s-pod-network.0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.706 [INFO][6104] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1 Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.708 [INFO][6104] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.66.128/26 handle="k8s-pod-network.0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.711 [INFO][6104] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.66.136/26] block=192.168.66.128/26 handle="k8s-pod-network.0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.711 [INFO][6104] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.136/26] handle="k8s-pod-network.0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" host="ci-4081.3.6-n-d37906c143" Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.711 [INFO][6104] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:31:48.720875 containerd[1835]: 2025-11-01 00:31:48.711 [INFO][6104] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.66.136/26] IPv6=[] ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" HandleID="k8s-pod-network.0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:31:48.721613 containerd[1835]: 2025-11-01 00:31:48.712 [INFO][6081] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" Namespace="calico-system" Pod="calico-kube-controllers-75c65644df-6srvh" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0", GenerateName:"calico-kube-controllers-75c65644df-", Namespace:"calico-system", SelfLink:"", UID:"055e53bc-992b-4781-aa59-63b9452c2f8e", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75c65644df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"", Pod:"calico-kube-controllers-75c65644df-6srvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic368a9c2d4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:48.721613 containerd[1835]: 2025-11-01 00:31:48.712 [INFO][6081] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.136/32] ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" Namespace="calico-system" Pod="calico-kube-controllers-75c65644df-6srvh" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:31:48.721613 containerd[1835]: 2025-11-01 00:31:48.712 [INFO][6081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic368a9c2d4a ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" Namespace="calico-system" Pod="calico-kube-controllers-75c65644df-6srvh" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:31:48.721613 containerd[1835]: 2025-11-01 00:31:48.713 [INFO][6081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" Namespace="calico-system" Pod="calico-kube-controllers-75c65644df-6srvh" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:31:48.721613 containerd[1835]: 2025-11-01 00:31:48.714 [INFO][6081] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" Namespace="calico-system" Pod="calico-kube-controllers-75c65644df-6srvh" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0", GenerateName:"calico-kube-controllers-75c65644df-", Namespace:"calico-system", SelfLink:"", UID:"055e53bc-992b-4781-aa59-63b9452c2f8e", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75c65644df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1", Pod:"calico-kube-controllers-75c65644df-6srvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic368a9c2d4a", MAC:"86:e2:81:2e:7f:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:31:48.721613 containerd[1835]: 2025-11-01 00:31:48.719 [INFO][6081] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1" Namespace="calico-system" Pod="calico-kube-controllers-75c65644df-6srvh" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:31:48.729448 containerd[1835]: time="2025-11-01T00:31:48.729405392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:31:48.729448 containerd[1835]: time="2025-11-01T00:31:48.729432406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:31:48.729448 containerd[1835]: time="2025-11-01T00:31:48.729439282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:48.729544 containerd[1835]: time="2025-11-01T00:31:48.729484536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:31:48.757417 systemd[1]: Started cri-containerd-0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1.scope - libcontainer container 0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1. 
Nov 1 00:31:48.773448 kubelet[3123]: E1101 00:31:48.773424 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:31:48.773448 kubelet[3123]: E1101 00:31:48.773432 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:31:48.795866 containerd[1835]: time="2025-11-01T00:31:48.795823253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75c65644df-6srvh,Uid:055e53bc-992b-4781-aa59-63b9452c2f8e,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1\"" Nov 1 00:31:48.796079 kubelet[3123]: I1101 00:31:48.796033 3123 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vcqfb" podStartSLOduration=34.796012898 podStartE2EDuration="34.796012898s" podCreationTimestamp="2025-11-01 00:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:31:48.795883159 +0000 UTC m=+40.218115647" watchObservedRunningTime="2025-11-01 00:31:48.796012898 +0000 UTC m=+40.218245382" Nov 1 00:31:48.796875 containerd[1835]: time="2025-11-01T00:31:48.796858304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:31:49.152458 containerd[1835]: time="2025-11-01T00:31:49.152333038Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:49.153348 containerd[1835]: time="2025-11-01T00:31:49.153269517Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:31:49.153383 containerd[1835]: time="2025-11-01T00:31:49.153340642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:31:49.153547 kubelet[3123]: E1101 00:31:49.153488 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:31:49.153547 kubelet[3123]: E1101 00:31:49.153515 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:31:49.153616 kubelet[3123]: E1101 00:31:49.153562 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75c65644df-6srvh_calico-system(055e53bc-992b-4781-aa59-63b9452c2f8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:49.153616 kubelet[3123]: E1101 00:31:49.153582 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:31:49.488583 systemd-networkd[1521]: cali38e475a6ce0: Gained IPv6LL Nov 1 00:31:49.781750 kubelet[3123]: E1101 00:31:49.781641 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:31:50.576325 systemd-networkd[1521]: calic368a9c2d4a: Gained IPv6LL Nov 1 00:31:50.781345 kubelet[3123]: E1101 00:31:50.781325 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:31:53.624228 containerd[1835]: time="2025-11-01T00:31:53.624004537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:31:53.973299 containerd[1835]: time="2025-11-01T00:31:53.973026303Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:53.973925 containerd[1835]: time="2025-11-01T00:31:53.973852435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:31:53.973925 containerd[1835]: time="2025-11-01T00:31:53.973915195Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:31:53.974056 kubelet[3123]: E1101 00:31:53.974039 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:31:53.974233 kubelet[3123]: E1101 00:31:53.974064 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:31:53.974233 kubelet[3123]: E1101 00:31:53.974114 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:53.974559 containerd[1835]: time="2025-11-01T00:31:53.974545982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:31:54.128254 kubelet[3123]: I1101 00:31:54.128187 3123 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:31:54.329126 containerd[1835]: time="2025-11-01T00:31:54.329080712Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:31:54.345856 containerd[1835]: time="2025-11-01T00:31:54.345813201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:31:54.345906 containerd[1835]: time="2025-11-01T00:31:54.345869124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:31:54.346018 kubelet[3123]: E1101 00:31:54.345995 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:31:54.346049 kubelet[3123]: E1101 00:31:54.346026 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:31:54.346082 kubelet[3123]: E1101 00:31:54.346072 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:31:54.346123 kubelet[3123]: E1101 00:31:54.346104 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:32:01.623666 containerd[1835]: time="2025-11-01T00:32:01.623590559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:32:01.962242 containerd[1835]: time="2025-11-01T00:32:01.961995817Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:01.963000 containerd[1835]: time="2025-11-01T00:32:01.962979824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:32:01.963062 containerd[1835]: time="2025-11-01T00:32:01.963039648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:32:01.963174 kubelet[3123]: E1101 00:32:01.963152 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:32:01.963355 kubelet[3123]: E1101 00:32:01.963182 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:32:01.963355 kubelet[3123]: E1101 00:32:01.963244 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-7nj5x_calico-apiserver(3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:01.963355 kubelet[3123]: E1101 00:32:01.963264 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:32:02.623559 containerd[1835]: time="2025-11-01T00:32:02.623428976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:32:02.967691 containerd[1835]: time="2025-11-01T00:32:02.967470390Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:02.968472 containerd[1835]: time="2025-11-01T00:32:02.968450418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:32:02.968548 containerd[1835]: time="2025-11-01T00:32:02.968535249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:32:02.968641 kubelet[3123]: E1101 00:32:02.968619 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:32:02.968767 kubelet[3123]: E1101 00:32:02.968649 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:32:02.968796 kubelet[3123]: E1101 00:32:02.968760 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:02.968874 containerd[1835]: time="2025-11-01T00:32:02.968861203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:32:03.342271 containerd[1835]: time="2025-11-01T00:32:03.342176353Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:03.345725 
containerd[1835]: time="2025-11-01T00:32:03.345682772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:32:03.345779 containerd[1835]: time="2025-11-01T00:32:03.345752833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:32:03.345852 kubelet[3123]: E1101 00:32:03.345828 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:32:03.345902 kubelet[3123]: E1101 00:32:03.345861 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:32:03.345967 kubelet[3123]: E1101 00:32:03.345951 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t7ml5_calico-system(0e047d2f-1491-42f0-a675-eff64087e5dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:03.345998 kubelet[3123]: E1101 00:32:03.345980 3123 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:32:03.346100 containerd[1835]: time="2025-11-01T00:32:03.346081031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:32:03.696220 containerd[1835]: time="2025-11-01T00:32:03.696085723Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:03.696810 containerd[1835]: time="2025-11-01T00:32:03.696785432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:32:03.696872 containerd[1835]: time="2025-11-01T00:32:03.696853439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:32:03.696948 kubelet[3123]: E1101 00:32:03.696926 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:32:03.696992 kubelet[3123]: E1101 00:32:03.696956 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:32:03.697102 kubelet[3123]: E1101 00:32:03.697079 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:03.697175 kubelet[3123]: E1101 00:32:03.697126 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:32:03.697227 containerd[1835]: time="2025-11-01T00:32:03.697183319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:32:04.060942 containerd[1835]: time="2025-11-01T00:32:04.060881542Z" level=info msg="trying next host - 
response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:04.061449 containerd[1835]: time="2025-11-01T00:32:04.061382976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:32:04.061484 containerd[1835]: time="2025-11-01T00:32:04.061443303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:32:04.061611 kubelet[3123]: E1101 00:32:04.061561 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:32:04.061611 kubelet[3123]: E1101 00:32:04.061589 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:32:04.061771 kubelet[3123]: E1101 00:32:04.061635 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-8h7pk_calico-apiserver(68ef77d9-c28e-4552-8ad9-f26358f8691b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 
00:32:04.061771 kubelet[3123]: E1101 00:32:04.061655 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:32:05.621339 containerd[1835]: time="2025-11-01T00:32:05.621316676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:32:05.971613 containerd[1835]: time="2025-11-01T00:32:05.971514965Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:05.972007 containerd[1835]: time="2025-11-01T00:32:05.971958697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:32:05.972060 containerd[1835]: time="2025-11-01T00:32:05.972004308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:32:05.972212 kubelet[3123]: E1101 00:32:05.972154 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:32:05.972212 kubelet[3123]: E1101 00:32:05.972193 3123 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:32:05.972457 kubelet[3123]: E1101 00:32:05.972256 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75c65644df-6srvh_calico-system(055e53bc-992b-4781-aa59-63b9452c2f8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:05.972457 kubelet[3123]: E1101 00:32:05.972285 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:32:07.623539 kubelet[3123]: E1101 00:32:07.623469 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:32:08.616599 containerd[1835]: time="2025-11-01T00:32:08.616562984Z" level=info msg="StopPodSandbox for \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\"" Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.635 [WARNING][6273] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0245246d-bdc5-450d-b21c-5eff759295d4", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e", Pod:"csi-node-driver-vvbjm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c4dba92441", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.635 [INFO][6273] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.635 [INFO][6273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" iface="eth0" netns="" Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.635 [INFO][6273] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.635 [INFO][6273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.647 [INFO][6291] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" HandleID="k8s-pod-network.f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.647 [INFO][6291] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.647 [INFO][6291] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.650 [WARNING][6291] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" HandleID="k8s-pod-network.f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.650 [INFO][6291] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" HandleID="k8s-pod-network.f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.651 [INFO][6291] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:08.653560 containerd[1835]: 2025-11-01 00:32:08.652 [INFO][6273] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:32:08.653920 containerd[1835]: time="2025-11-01T00:32:08.653581915Z" level=info msg="TearDown network for sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\" successfully" Nov 1 00:32:08.653920 containerd[1835]: time="2025-11-01T00:32:08.653599870Z" level=info msg="StopPodSandbox for \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\" returns successfully" Nov 1 00:32:08.653964 containerd[1835]: time="2025-11-01T00:32:08.653929895Z" level=info msg="RemovePodSandbox for \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\"" Nov 1 00:32:08.653964 containerd[1835]: time="2025-11-01T00:32:08.653949196Z" level=info msg="Forcibly stopping sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\"" Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.672 [WARNING][6319] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0245246d-bdc5-450d-b21c-5eff759295d4", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"2c0efddf6384f514703ffb10a1dd7426eef077d59e78dad06daa035620a8f40e", Pod:"csi-node-driver-vvbjm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9c4dba92441", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.672 [INFO][6319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.672 [INFO][6319] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" iface="eth0" netns="" Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.672 [INFO][6319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.672 [INFO][6319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.683 [INFO][6337] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" HandleID="k8s-pod-network.f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.683 [INFO][6337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.683 [INFO][6337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.687 [WARNING][6337] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" HandleID="k8s-pod-network.f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.687 [INFO][6337] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" HandleID="k8s-pod-network.f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Workload="ci--4081.3.6--n--d37906c143-k8s-csi--node--driver--vvbjm-eth0" Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.689 [INFO][6337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:08.690662 containerd[1835]: 2025-11-01 00:32:08.689 [INFO][6319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2" Nov 1 00:32:08.690986 containerd[1835]: time="2025-11-01T00:32:08.690693723Z" level=info msg="TearDown network for sandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\" successfully" Nov 1 00:32:08.692330 containerd[1835]: time="2025-11-01T00:32:08.692266722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:32:08.692330 containerd[1835]: time="2025-11-01T00:32:08.692293618Z" level=info msg="RemovePodSandbox \"f849be5a989c10922f122a4705f80c746785f1b014c155c8d4b0088cbb2579a2\" returns successfully" Nov 1 00:32:08.692672 containerd[1835]: time="2025-11-01T00:32:08.692660710Z" level=info msg="StopPodSandbox for \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\"" Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.710 [WARNING][6362] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0", GenerateName:"calico-apiserver-57458876-", Namespace:"calico-apiserver", SelfLink:"", UID:"68ef77d9-c28e-4552-8ad9-f26358f8691b", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57458876", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01", Pod:"calico-apiserver-57458876-8h7pk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d89e2268e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.710 [INFO][6362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.710 [INFO][6362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" iface="eth0" netns="" Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.710 [INFO][6362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.710 [INFO][6362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.720 [INFO][6376] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" HandleID="k8s-pod-network.7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.721 [INFO][6376] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.721 [INFO][6376] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.725 [WARNING][6376] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" HandleID="k8s-pod-network.7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.725 [INFO][6376] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" HandleID="k8s-pod-network.7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.726 [INFO][6376] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:08.727970 containerd[1835]: 2025-11-01 00:32:08.727 [INFO][6362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:32:08.727970 containerd[1835]: time="2025-11-01T00:32:08.727966920Z" level=info msg="TearDown network for sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\" successfully" Nov 1 00:32:08.728455 containerd[1835]: time="2025-11-01T00:32:08.727983645Z" level=info msg="StopPodSandbox for \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\" returns successfully" Nov 1 00:32:08.728455 containerd[1835]: time="2025-11-01T00:32:08.728247118Z" level=info msg="RemovePodSandbox for \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\"" Nov 1 00:32:08.728455 containerd[1835]: time="2025-11-01T00:32:08.728267515Z" level=info msg="Forcibly stopping sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\"" Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.746 [WARNING][6400] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0", GenerateName:"calico-apiserver-57458876-", Namespace:"calico-apiserver", SelfLink:"", UID:"68ef77d9-c28e-4552-8ad9-f26358f8691b", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57458876", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"326655639423d86137ecedf5c7af165442e4b9ab42e6a407c2dff058515abd01", Pod:"calico-apiserver-57458876-8h7pk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d89e2268e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.747 [INFO][6400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.747 [INFO][6400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" iface="eth0" netns="" Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.747 [INFO][6400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.747 [INFO][6400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.769 [INFO][6416] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" HandleID="k8s-pod-network.7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.769 [INFO][6416] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.769 [INFO][6416] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.773 [WARNING][6416] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" HandleID="k8s-pod-network.7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.773 [INFO][6416] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" HandleID="k8s-pod-network.7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--8h7pk-eth0" Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.774 [INFO][6416] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:08.776587 containerd[1835]: 2025-11-01 00:32:08.775 [INFO][6400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c" Nov 1 00:32:08.776910 containerd[1835]: time="2025-11-01T00:32:08.776611152Z" level=info msg="TearDown network for sandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\" successfully" Nov 1 00:32:08.778172 containerd[1835]: time="2025-11-01T00:32:08.778114417Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:32:08.778172 containerd[1835]: time="2025-11-01T00:32:08.778139047Z" level=info msg="RemovePodSandbox \"7373f7d66f6b97aff4969d6d840b690183f4182eb9bc5d99b6c5862aa6fff82c\" returns successfully" Nov 1 00:32:08.778420 containerd[1835]: time="2025-11-01T00:32:08.778408300Z" level=info msg="StopPodSandbox for \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\"" Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.795 [WARNING][6445] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0", GenerateName:"calico-kube-controllers-75c65644df-", Namespace:"calico-system", SelfLink:"", UID:"055e53bc-992b-4781-aa59-63b9452c2f8e", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75c65644df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1", Pod:"calico-kube-controllers-75c65644df-6srvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.136/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic368a9c2d4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.795 [INFO][6445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.795 [INFO][6445] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" iface="eth0" netns="" Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.795 [INFO][6445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.795 [INFO][6445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.806 [INFO][6460] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" HandleID="k8s-pod-network.664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.806 [INFO][6460] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.806 [INFO][6460] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.810 [WARNING][6460] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" HandleID="k8s-pod-network.664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.810 [INFO][6460] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" HandleID="k8s-pod-network.664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.811 [INFO][6460] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:08.812779 containerd[1835]: 2025-11-01 00:32:08.812 [INFO][6445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:32:08.813167 containerd[1835]: time="2025-11-01T00:32:08.812807144Z" level=info msg="TearDown network for sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\" successfully" Nov 1 00:32:08.813167 containerd[1835]: time="2025-11-01T00:32:08.812828896Z" level=info msg="StopPodSandbox for \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\" returns successfully" Nov 1 00:32:08.813167 containerd[1835]: time="2025-11-01T00:32:08.813128538Z" level=info msg="RemovePodSandbox for \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\"" Nov 1 00:32:08.813167 containerd[1835]: time="2025-11-01T00:32:08.813150104Z" level=info msg="Forcibly stopping sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\"" Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.830 [WARNING][6486] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0", GenerateName:"calico-kube-controllers-75c65644df-", Namespace:"calico-system", SelfLink:"", UID:"055e53bc-992b-4781-aa59-63b9452c2f8e", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75c65644df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"0d1d953caca2c8df3123063e7d93d579d1886ce15982e6d6a7925139c57f17b1", Pod:"calico-kube-controllers-75c65644df-6srvh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic368a9c2d4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.830 [INFO][6486] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.830 [INFO][6486] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" iface="eth0" netns="" Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.830 [INFO][6486] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.830 [INFO][6486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.840 [INFO][6500] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" HandleID="k8s-pod-network.664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.840 [INFO][6500] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.840 [INFO][6500] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.844 [WARNING][6500] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" HandleID="k8s-pod-network.664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.844 [INFO][6500] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" HandleID="k8s-pod-network.664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--kube--controllers--75c65644df--6srvh-eth0" Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.845 [INFO][6500] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:08.847254 containerd[1835]: 2025-11-01 00:32:08.846 [INFO][6486] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf" Nov 1 00:32:08.847565 containerd[1835]: time="2025-11-01T00:32:08.847282483Z" level=info msg="TearDown network for sandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\" successfully" Nov 1 00:32:08.848816 containerd[1835]: time="2025-11-01T00:32:08.848802851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:32:08.848845 containerd[1835]: time="2025-11-01T00:32:08.848829513Z" level=info msg="RemovePodSandbox \"664e26c32bc6bee1f4c06f6b1cc03be9ac240dda8766d0da272e5eafee71b4cf\" returns successfully" Nov 1 00:32:08.849094 containerd[1835]: time="2025-11-01T00:32:08.849081480Z" level=info msg="StopPodSandbox for \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\"" Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.867 [WARNING][6528] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0", GenerateName:"calico-apiserver-57458876-", Namespace:"calico-apiserver", SelfLink:"", UID:"3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57458876", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c", Pod:"calico-apiserver-57458876-7nj5x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14253e12abb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.867 [INFO][6528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.867 [INFO][6528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" iface="eth0" netns="" Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.867 [INFO][6528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.867 [INFO][6528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.876 [INFO][6545] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" HandleID="k8s-pod-network.0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.876 [INFO][6545] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.876 [INFO][6545] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.881 [WARNING][6545] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" HandleID="k8s-pod-network.0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.881 [INFO][6545] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" HandleID="k8s-pod-network.0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.882 [INFO][6545] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:08.883545 containerd[1835]: 2025-11-01 00:32:08.882 [INFO][6528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:32:08.883545 containerd[1835]: time="2025-11-01T00:32:08.883503552Z" level=info msg="TearDown network for sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\" successfully" Nov 1 00:32:08.883545 containerd[1835]: time="2025-11-01T00:32:08.883519711Z" level=info msg="StopPodSandbox for \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\" returns successfully" Nov 1 00:32:08.883860 containerd[1835]: time="2025-11-01T00:32:08.883824665Z" level=info msg="RemovePodSandbox for \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\"" Nov 1 00:32:08.883860 containerd[1835]: time="2025-11-01T00:32:08.883839757Z" level=info msg="Forcibly stopping sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\"" Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.901 [WARNING][6569] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0", GenerateName:"calico-apiserver-57458876-", Namespace:"calico-apiserver", SelfLink:"", UID:"3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57458876", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"f7132dbf330d2e6ab7bab5dba953d6616f5eda88d71c5aa0c45bdc1b8f5f061c", Pod:"calico-apiserver-57458876-7nj5x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14253e12abb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.901 [INFO][6569] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.901 [INFO][6569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" iface="eth0" netns="" Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.901 [INFO][6569] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.901 [INFO][6569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.911 [INFO][6586] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" HandleID="k8s-pod-network.0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.911 [INFO][6586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.911 [INFO][6586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.915 [WARNING][6586] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" HandleID="k8s-pod-network.0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.915 [INFO][6586] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" HandleID="k8s-pod-network.0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Workload="ci--4081.3.6--n--d37906c143-k8s-calico--apiserver--57458876--7nj5x-eth0" Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.916 [INFO][6586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:08.918184 containerd[1835]: 2025-11-01 00:32:08.917 [INFO][6569] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b" Nov 1 00:32:08.918507 containerd[1835]: time="2025-11-01T00:32:08.918209714Z" level=info msg="TearDown network for sandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\" successfully" Nov 1 00:32:08.919577 containerd[1835]: time="2025-11-01T00:32:08.919561747Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:32:08.919617 containerd[1835]: time="2025-11-01T00:32:08.919590454Z" level=info msg="RemovePodSandbox \"0de65faf2b2975a3e4aa5a1d57dc3e0a8efb715240c19d0a36b2523ca5ff6c5b\" returns successfully" Nov 1 00:32:08.919851 containerd[1835]: time="2025-11-01T00:32:08.919838866Z" level=info msg="StopPodSandbox for \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\"" Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.937 [WARNING][6613] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.937 [INFO][6613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.937 [INFO][6613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" iface="eth0" netns="" Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.937 [INFO][6613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.937 [INFO][6613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.947 [INFO][6629] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" HandleID="k8s-pod-network.d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.947 [INFO][6629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.947 [INFO][6629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.951 [WARNING][6629] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" HandleID="k8s-pod-network.d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.951 [INFO][6629] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" HandleID="k8s-pod-network.d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.952 [INFO][6629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:08.954122 containerd[1835]: 2025-11-01 00:32:08.953 [INFO][6613] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:32:08.954539 containerd[1835]: time="2025-11-01T00:32:08.954146526Z" level=info msg="TearDown network for sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\" successfully" Nov 1 00:32:08.954539 containerd[1835]: time="2025-11-01T00:32:08.954162465Z" level=info msg="StopPodSandbox for \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\" returns successfully" Nov 1 00:32:08.954539 containerd[1835]: time="2025-11-01T00:32:08.954452652Z" level=info msg="RemovePodSandbox for \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\"" Nov 1 00:32:08.954539 containerd[1835]: time="2025-11-01T00:32:08.954468090Z" level=info msg="Forcibly stopping sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\"" Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.971 [WARNING][6649] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" WorkloadEndpoint="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.971 [INFO][6649] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.971 [INFO][6649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" iface="eth0" netns="" Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.971 [INFO][6649] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.971 [INFO][6649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.981 [INFO][6664] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" HandleID="k8s-pod-network.d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.981 [INFO][6664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.981 [INFO][6664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.985 [WARNING][6664] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" HandleID="k8s-pod-network.d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.985 [INFO][6664] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" HandleID="k8s-pod-network.d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Workload="ci--4081.3.6--n--d37906c143-k8s-whisker--6f588c5579--5fbf4-eth0" Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.986 [INFO][6664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:08.988331 containerd[1835]: 2025-11-01 00:32:08.987 [INFO][6649] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c" Nov 1 00:32:08.988587 containerd[1835]: time="2025-11-01T00:32:08.988355541Z" level=info msg="TearDown network for sandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\" successfully" Nov 1 00:32:08.989934 containerd[1835]: time="2025-11-01T00:32:08.989889705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:32:08.989934 containerd[1835]: time="2025-11-01T00:32:08.989917932Z" level=info msg="RemovePodSandbox \"d5854225c91f83a6e7192bb1af2fc3606cfbbf5845657c4e02c3f875d814c18c\" returns successfully" Nov 1 00:32:08.990245 containerd[1835]: time="2025-11-01T00:32:08.990211676Z" level=info msg="StopPodSandbox for \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\"" Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.008 [WARNING][6689] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9a95ee66-2c19-4f09-bcfd-9c4e55da76e6", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6", Pod:"coredns-66bc5c9577-vcqfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38e475a6ce0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.009 [INFO][6689] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.009 [INFO][6689] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" iface="eth0" netns="" Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.009 [INFO][6689] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.009 [INFO][6689] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.019 [INFO][6706] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" HandleID="k8s-pod-network.666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.019 [INFO][6706] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.019 [INFO][6706] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.023 [WARNING][6706] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" HandleID="k8s-pod-network.666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.023 [INFO][6706] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" HandleID="k8s-pod-network.666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.024 [INFO][6706] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:09.025799 containerd[1835]: 2025-11-01 00:32:09.025 [INFO][6689] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:32:09.026132 containerd[1835]: time="2025-11-01T00:32:09.025801918Z" level=info msg="TearDown network for sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\" successfully" Nov 1 00:32:09.026132 containerd[1835]: time="2025-11-01T00:32:09.025823835Z" level=info msg="StopPodSandbox for \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\" returns successfully" Nov 1 00:32:09.026132 containerd[1835]: time="2025-11-01T00:32:09.026077111Z" level=info msg="RemovePodSandbox for \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\"" Nov 1 00:32:09.026132 containerd[1835]: time="2025-11-01T00:32:09.026097006Z" level=info msg="Forcibly stopping sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\"" Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.043 [WARNING][6729] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"9a95ee66-2c19-4f09-bcfd-9c4e55da76e6", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"b22ec343cefb8b41f918c8fc8320743fc0400807eb2fb24dce88c2f8f1457ed6", Pod:"coredns-66bc5c9577-vcqfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali38e475a6ce0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.043 [INFO][6729] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.043 [INFO][6729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" iface="eth0" netns="" Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.043 [INFO][6729] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.043 [INFO][6729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.054 [INFO][6745] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" HandleID="k8s-pod-network.666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.054 [INFO][6745] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.054 [INFO][6745] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.058 [WARNING][6745] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" HandleID="k8s-pod-network.666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.058 [INFO][6745] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" HandleID="k8s-pod-network.666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--vcqfb-eth0" Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.060 [INFO][6745] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:09.062048 containerd[1835]: 2025-11-01 00:32:09.061 [INFO][6729] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b" Nov 1 00:32:09.062399 containerd[1835]: time="2025-11-01T00:32:09.062077565Z" level=info msg="TearDown network for sandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\" successfully" Nov 1 00:32:09.063493 containerd[1835]: time="2025-11-01T00:32:09.063452185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:32:09.063493 containerd[1835]: time="2025-11-01T00:32:09.063476644Z" level=info msg="RemovePodSandbox \"666c42db1a1fa64cb26ba2e8d6f11128b85a3da0be8b25361d1516a0819b894b\" returns successfully" Nov 1 00:32:09.063780 containerd[1835]: time="2025-11-01T00:32:09.063741530Z" level=info msg="StopPodSandbox for \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\"" Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.081 [WARNING][6770] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0e047d2f-1491-42f0-a675-eff64087e5dd", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43", Pod:"goldmane-7c778bb748-t7ml5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calif93c6fc9a63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.081 [INFO][6770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.081 [INFO][6770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" iface="eth0" netns="" Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.081 [INFO][6770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.081 [INFO][6770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.091 [INFO][6789] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" HandleID="k8s-pod-network.60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.091 [INFO][6789] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.091 [INFO][6789] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.095 [WARNING][6789] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" HandleID="k8s-pod-network.60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.095 [INFO][6789] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" HandleID="k8s-pod-network.60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.096 [INFO][6789] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:09.098110 containerd[1835]: 2025-11-01 00:32:09.097 [INFO][6770] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:32:09.098429 containerd[1835]: time="2025-11-01T00:32:09.098116739Z" level=info msg="TearDown network for sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\" successfully" Nov 1 00:32:09.098429 containerd[1835]: time="2025-11-01T00:32:09.098132734Z" level=info msg="StopPodSandbox for \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\" returns successfully" Nov 1 00:32:09.098474 containerd[1835]: time="2025-11-01T00:32:09.098429475Z" level=info msg="RemovePodSandbox for \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\"" Nov 1 00:32:09.098474 containerd[1835]: time="2025-11-01T00:32:09.098447342Z" level=info msg="Forcibly stopping sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\"" Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.116 [WARNING][6816] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0e047d2f-1491-42f0-a675-eff64087e5dd", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"6a04036bf34c1c8547e9bf5134bb66fcfa4a03cf1c647c80bf429c55fd5e5b43", Pod:"goldmane-7c778bb748-t7ml5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif93c6fc9a63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.116 [INFO][6816] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.116 [INFO][6816] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" iface="eth0" netns="" Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.116 [INFO][6816] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.116 [INFO][6816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.126 [INFO][6834] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" HandleID="k8s-pod-network.60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.126 [INFO][6834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.126 [INFO][6834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.130 [WARNING][6834] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" HandleID="k8s-pod-network.60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.130 [INFO][6834] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" HandleID="k8s-pod-network.60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Workload="ci--4081.3.6--n--d37906c143-k8s-goldmane--7c778bb748--t7ml5-eth0" Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.131 [INFO][6834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:09.133182 containerd[1835]: 2025-11-01 00:32:09.132 [INFO][6816] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9" Nov 1 00:32:09.133182 containerd[1835]: time="2025-11-01T00:32:09.133171274Z" level=info msg="TearDown network for sandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\" successfully" Nov 1 00:32:09.152173 containerd[1835]: time="2025-11-01T00:32:09.148662762Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:32:09.152173 containerd[1835]: time="2025-11-01T00:32:09.148730416Z" level=info msg="RemovePodSandbox \"60fb94313ea774067dd97b731631fd058225ad779374190ba5ad777fc9f762c9\" returns successfully" Nov 1 00:32:09.152173 containerd[1835]: time="2025-11-01T00:32:09.149297320Z" level=info msg="StopPodSandbox for \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\"" Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.172 [WARNING][6862] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bd354653-92ce-413f-9189-183709f503cd", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746", Pod:"coredns-66bc5c9577-6d7s5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5987d9d7177", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.172 [INFO][6862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.172 [INFO][6862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" iface="eth0" netns="" Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.172 [INFO][6862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.173 [INFO][6862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.184 [INFO][6878] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" HandleID="k8s-pod-network.4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.184 [INFO][6878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.184 [INFO][6878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.187 [WARNING][6878] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" HandleID="k8s-pod-network.4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.187 [INFO][6878] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" HandleID="k8s-pod-network.4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.188 [INFO][6878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:09.189801 containerd[1835]: 2025-11-01 00:32:09.189 [INFO][6862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:32:09.190089 containerd[1835]: time="2025-11-01T00:32:09.189800869Z" level=info msg="TearDown network for sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\" successfully" Nov 1 00:32:09.190089 containerd[1835]: time="2025-11-01T00:32:09.189820340Z" level=info msg="StopPodSandbox for \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\" returns successfully" Nov 1 00:32:09.190089 containerd[1835]: time="2025-11-01T00:32:09.190068559Z" level=info msg="RemovePodSandbox for \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\"" Nov 1 00:32:09.190089 containerd[1835]: time="2025-11-01T00:32:09.190084214Z" level=info msg="Forcibly stopping sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\"" Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.207 [WARNING][6901] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bd354653-92ce-413f-9189-183709f503cd", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 31, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d37906c143", ContainerID:"f467345a09fcdc0ee84e1f54d8fecde9f15710d975805aa453f258e26e44a746", Pod:"coredns-66bc5c9577-6d7s5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5987d9d7177", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.207 [INFO][6901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.207 [INFO][6901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" iface="eth0" netns="" Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.207 [INFO][6901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.207 [INFO][6901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.217 [INFO][6915] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" HandleID="k8s-pod-network.4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.217 [INFO][6915] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.217 [INFO][6915] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.221 [WARNING][6915] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" HandleID="k8s-pod-network.4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.221 [INFO][6915] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" HandleID="k8s-pod-network.4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Workload="ci--4081.3.6--n--d37906c143-k8s-coredns--66bc5c9577--6d7s5-eth0" Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.222 [INFO][6915] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:32:09.223791 containerd[1835]: 2025-11-01 00:32:09.223 [INFO][6901] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e" Nov 1 00:32:09.224080 containerd[1835]: time="2025-11-01T00:32:09.223817944Z" level=info msg="TearDown network for sandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\" successfully" Nov 1 00:32:09.258909 containerd[1835]: time="2025-11-01T00:32:09.258882220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:32:09.258999 containerd[1835]: time="2025-11-01T00:32:09.258923888Z" level=info msg="RemovePodSandbox \"4ca5ae4bbde270de3fab875a5e2f60456e5dd1b2597bd274ab7bb8b4aa88ab0e\" returns successfully" Nov 1 00:32:13.621423 kubelet[3123]: E1101 00:32:13.621366 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:32:14.622586 kubelet[3123]: E1101 00:32:14.622502 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:32:16.622688 kubelet[3123]: E1101 00:32:16.622561 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:32:18.621701 kubelet[3123]: E1101 00:32:18.621651 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:32:18.621983 kubelet[3123]: E1101 00:32:18.621896 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:32:22.623449 containerd[1835]: time="2025-11-01T00:32:22.623359338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 
00:32:23.038756 containerd[1835]: time="2025-11-01T00:32:23.038631945Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:23.044790 containerd[1835]: time="2025-11-01T00:32:23.044718012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:32:23.044841 containerd[1835]: time="2025-11-01T00:32:23.044775126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:32:23.044951 kubelet[3123]: E1101 00:32:23.044891 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:32:23.044951 kubelet[3123]: E1101 00:32:23.044928 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:32:23.045252 kubelet[3123]: E1101 00:32:23.044992 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found" logger="UnhandledError" Nov 1 00:32:23.045490 containerd[1835]: time="2025-11-01T00:32:23.045456338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:32:23.421212 containerd[1835]: time="2025-11-01T00:32:23.421103987Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:23.421784 containerd[1835]: time="2025-11-01T00:32:23.421704743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:32:23.421837 containerd[1835]: time="2025-11-01T00:32:23.421777557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:32:23.421946 kubelet[3123]: E1101 00:32:23.421895 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:32:23.421946 kubelet[3123]: E1101 00:32:23.421926 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:32:23.422011 kubelet[3123]: E1101 00:32:23.421973 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:23.422011 kubelet[3123]: E1101 00:32:23.421999 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:32:26.621752 containerd[1835]: time="2025-11-01T00:32:26.621722786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:32:26.987295 containerd[1835]: time="2025-11-01T00:32:26.987022986Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:26.988101 containerd[1835]: time="2025-11-01T00:32:26.988069235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:32:26.988155 containerd[1835]: 
time="2025-11-01T00:32:26.988102614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:32:26.988282 kubelet[3123]: E1101 00:32:26.988227 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:32:26.988282 kubelet[3123]: E1101 00:32:26.988259 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:32:26.988496 kubelet[3123]: E1101 00:32:26.988349 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-7nj5x_calico-apiserver(3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:26.988496 kubelet[3123]: E1101 00:32:26.988375 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" 
podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:32:26.988554 containerd[1835]: time="2025-11-01T00:32:26.988467519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:32:27.344147 containerd[1835]: time="2025-11-01T00:32:27.343988147Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:27.344903 containerd[1835]: time="2025-11-01T00:32:27.344831324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:32:27.344943 containerd[1835]: time="2025-11-01T00:32:27.344902793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:32:27.345012 kubelet[3123]: E1101 00:32:27.344992 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:32:27.345054 kubelet[3123]: E1101 00:32:27.345019 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:32:27.345078 kubelet[3123]: E1101 00:32:27.345063 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t7ml5_calico-system(0e047d2f-1491-42f0-a675-eff64087e5dd): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:27.345107 kubelet[3123]: E1101 00:32:27.345082 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:32:27.623290 containerd[1835]: time="2025-11-01T00:32:27.623027661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:32:27.965121 containerd[1835]: time="2025-11-01T00:32:27.964887494Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:27.965898 containerd[1835]: time="2025-11-01T00:32:27.965869769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:32:27.965952 containerd[1835]: time="2025-11-01T00:32:27.965933292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:32:27.966076 kubelet[3123]: E1101 00:32:27.966056 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:32:27.966136 kubelet[3123]: E1101 00:32:27.966082 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:32:27.966164 kubelet[3123]: E1101 00:32:27.966151 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-8h7pk_calico-apiserver(68ef77d9-c28e-4552-8ad9-f26358f8691b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:27.966187 kubelet[3123]: E1101 00:32:27.966173 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:32:31.621028 containerd[1835]: time="2025-11-01T00:32:31.621001107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:32:31.962553 containerd[1835]: time="2025-11-01T00:32:31.962324735Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:31.963221 containerd[1835]: time="2025-11-01T00:32:31.963198500Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:32:31.963282 containerd[1835]: time="2025-11-01T00:32:31.963266556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:32:31.963375 kubelet[3123]: E1101 00:32:31.963353 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:32:31.963597 kubelet[3123]: E1101 00:32:31.963384 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:32:31.963597 kubelet[3123]: E1101 00:32:31.963470 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:31.963666 containerd[1835]: time="2025-11-01T00:32:31.963607024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:32:32.496750 containerd[1835]: time="2025-11-01T00:32:32.496669494Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 00:32:32.497702 containerd[1835]: time="2025-11-01T00:32:32.497624102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:32:32.497702 containerd[1835]: time="2025-11-01T00:32:32.497689374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:32:32.497816 kubelet[3123]: E1101 00:32:32.497794 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:32:32.497848 kubelet[3123]: E1101 00:32:32.497824 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:32:32.497958 kubelet[3123]: E1101 00:32:32.497942 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75c65644df-6srvh_calico-system(055e53bc-992b-4781-aa59-63b9452c2f8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:32.498013 kubelet[3123]: E1101 00:32:32.497970 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:32:32.498045 containerd[1835]: time="2025-11-01T00:32:32.498002862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:32:32.844408 containerd[1835]: time="2025-11-01T00:32:32.844349620Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:32:32.844919 containerd[1835]: time="2025-11-01T00:32:32.844895364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:32:32.844963 containerd[1835]: time="2025-11-01T00:32:32.844918263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:32:32.845052 kubelet[3123]: E1101 00:32:32.845025 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:32:32.845081 kubelet[3123]: E1101 00:32:32.845061 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:32:32.845137 kubelet[3123]: E1101 00:32:32.845124 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:32:32.845184 kubelet[3123]: E1101 00:32:32.845160 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:32:35.623746 kubelet[3123]: E1101 00:32:35.623674 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:32:39.621017 kubelet[3123]: E1101 00:32:39.620996 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:32:39.621017 kubelet[3123]: E1101 00:32:39.621016 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:32:42.621171 kubelet[3123]: E1101 00:32:42.621141 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:32:43.623919 kubelet[3123]: E1101 00:32:43.623836 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:32:46.623528 kubelet[3123]: E1101 00:32:46.623438 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:32:49.623893 kubelet[3123]: E1101 00:32:49.623790 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:32:52.622897 kubelet[3123]: E1101 00:32:52.622752 3123 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:32:53.620946 kubelet[3123]: E1101 00:32:53.620919 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:32:54.621840 kubelet[3123]: E1101 00:32:54.621760 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:32:58.621665 kubelet[3123]: E1101 00:32:58.621605 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:33:00.621401 kubelet[3123]: E1101 00:33:00.621368 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:33:01.621064 kubelet[3123]: E1101 00:33:01.621029 3123 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:33:03.621924 kubelet[3123]: E1101 00:33:03.621863 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:33:05.623073 kubelet[3123]: E1101 00:33:05.622978 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:33:08.625390 containerd[1835]: time="2025-11-01T00:33:08.625309631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:33:08.983030 
containerd[1835]: time="2025-11-01T00:33:08.982782577Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:33:08.983851 containerd[1835]: time="2025-11-01T00:33:08.983778776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:33:08.983894 containerd[1835]: time="2025-11-01T00:33:08.983844876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:33:08.984003 kubelet[3123]: E1101 00:33:08.983981 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:33:08.984201 kubelet[3123]: E1101 00:33:08.984010 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:33:08.984201 kubelet[3123]: E1101 00:33:08.984064 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-7nj5x_calico-apiserver(3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:33:08.984201 kubelet[3123]: E1101 00:33:08.984085 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:33:09.621725 kubelet[3123]: E1101 00:33:09.621695 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:33:13.623954 containerd[1835]: time="2025-11-01T00:33:13.623870499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:33:13.970153 containerd[1835]: time="2025-11-01T00:33:13.969872740Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 1 00:33:13.970820 containerd[1835]: time="2025-11-01T00:33:13.970755044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:33:13.970857 containerd[1835]: time="2025-11-01T00:33:13.970821170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:33:13.970946 kubelet[3123]: E1101 00:33:13.970894 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:33:13.970946 kubelet[3123]: E1101 00:33:13.970923 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:33:13.971152 kubelet[3123]: E1101 00:33:13.970977 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75c65644df-6srvh_calico-system(055e53bc-992b-4781-aa59-63b9452c2f8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:33:13.971152 kubelet[3123]: E1101 00:33:13.970998 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:33:15.622974 containerd[1835]: time="2025-11-01T00:33:15.622877882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:33:15.958423 containerd[1835]: time="2025-11-01T00:33:15.958172766Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:33:15.959198 containerd[1835]: time="2025-11-01T00:33:15.959172568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:33:15.959270 containerd[1835]: time="2025-11-01T00:33:15.959244254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:33:15.959420 kubelet[3123]: E1101 00:33:15.959368 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:33:15.959420 kubelet[3123]: E1101 
00:33:15.959397 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:33:15.959727 kubelet[3123]: E1101 00:33:15.959449 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:33:15.960030 containerd[1835]: time="2025-11-01T00:33:15.960018187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:33:16.327734 containerd[1835]: time="2025-11-01T00:33:16.327679338Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:33:16.328113 containerd[1835]: time="2025-11-01T00:33:16.328061949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:33:16.328154 containerd[1835]: time="2025-11-01T00:33:16.328099161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:33:16.328237 kubelet[3123]: E1101 00:33:16.328193 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:33:16.328237 kubelet[3123]: E1101 00:33:16.328225 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:33:16.328297 kubelet[3123]: E1101 00:33:16.328279 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:33:16.328321 kubelet[3123]: E1101 00:33:16.328304 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:33:17.621780 containerd[1835]: time="2025-11-01T00:33:17.621748336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:33:17.963594 containerd[1835]: time="2025-11-01T00:33:17.963496137Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:33:17.964071 containerd[1835]: time="2025-11-01T00:33:17.964050301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:33:17.964125 containerd[1835]: time="2025-11-01T00:33:17.964100424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:33:17.964265 kubelet[3123]: E1101 00:33:17.964210 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:33:17.964265 kubelet[3123]: E1101 00:33:17.964241 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:33:17.964465 kubelet[3123]: E1101 00:33:17.964296 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-t7ml5_calico-system(0e047d2f-1491-42f0-a675-eff64087e5dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:33:17.964465 kubelet[3123]: E1101 00:33:17.964317 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:33:19.621512 containerd[1835]: time="2025-11-01T00:33:19.621453960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:33:19.991044 containerd[1835]: time="2025-11-01T00:33:19.990787722Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:33:19.991862 containerd[1835]: time="2025-11-01T00:33:19.991789855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:33:19.991903 containerd[1835]: time="2025-11-01T00:33:19.991860598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:33:19.991998 kubelet[3123]: E1101 00:33:19.991976 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:33:19.992173 kubelet[3123]: E1101 00:33:19.992021 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:33:19.992173 kubelet[3123]: E1101 00:33:19.992072 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-8h7pk_calico-apiserver(68ef77d9-c28e-4552-8ad9-f26358f8691b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:33:19.992173 kubelet[3123]: E1101 00:33:19.992098 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:33:22.630665 kubelet[3123]: E1101 00:33:22.630567 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:33:22.631992 containerd[1835]: time="2025-11-01T00:33:22.631342850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:33:22.975125 containerd[1835]: time="2025-11-01T00:33:22.974818330Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:33:22.975895 containerd[1835]: time="2025-11-01T00:33:22.975823388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:33:22.975936 containerd[1835]: time="2025-11-01T00:33:22.975891806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:33:22.976041 kubelet[3123]: E1101 00:33:22.975985 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:33:22.976041 kubelet[3123]: E1101 00:33:22.976017 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:33:22.976106 
kubelet[3123]: E1101 00:33:22.976065 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:33:22.976521 containerd[1835]: time="2025-11-01T00:33:22.976481986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:33:23.347954 containerd[1835]: time="2025-11-01T00:33:23.347848634Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:33:23.348826 containerd[1835]: time="2025-11-01T00:33:23.348795907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:33:23.348887 containerd[1835]: time="2025-11-01T00:33:23.348855001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:33:23.348986 kubelet[3123]: E1101 00:33:23.348962 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:33:23.349034 kubelet[3123]: E1101 00:33:23.348994 3123 kuberuntime_image.go:43] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:33:23.349070 kubelet[3123]: E1101 00:33:23.349040 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:33:23.349142 kubelet[3123]: E1101 00:33:23.349076 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:33:28.624607 kubelet[3123]: E1101 00:33:28.624507 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:33:30.621623 kubelet[3123]: E1101 00:33:30.621588 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:33:30.622074 kubelet[3123]: E1101 00:33:30.621918 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:33:31.622390 kubelet[3123]: E1101 00:33:31.622298 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:33:36.623611 kubelet[3123]: E1101 00:33:36.623522 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:33:36.625261 kubelet[3123]: E1101 00:33:36.624619 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:33:42.622847 kubelet[3123]: E1101 00:33:42.622745 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:33:43.621818 kubelet[3123]: E1101 00:33:43.621776 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:33:43.622052 kubelet[3123]: E1101 00:33:43.622031 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:33:44.625557 kubelet[3123]: E1101 00:33:44.625523 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:33:49.622071 kubelet[3123]: E1101 00:33:49.621742 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:33:51.622131 kubelet[3123]: E1101 00:33:51.622106 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:33:54.622032 kubelet[3123]: E1101 00:33:54.621999 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:33:55.622618 kubelet[3123]: E1101 00:33:55.622551 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:33:56.621287 kubelet[3123]: E1101 00:33:56.621258 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:33:58.623124 kubelet[3123]: E1101 00:33:58.623024 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:34:00.623262 kubelet[3123]: E1101 00:34:00.623129 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:34:02.622233 kubelet[3123]: E1101 00:34:02.622153 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:34:06.621299 kubelet[3123]: E1101 00:34:06.621269 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:34:10.622905 kubelet[3123]: E1101 00:34:10.622828 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:34:10.622905 kubelet[3123]: E1101 00:34:10.622839 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:34:11.622875 kubelet[3123]: E1101 00:34:11.622754 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:34:11.622875 kubelet[3123]: E1101 00:34:11.622801 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:34:13.621479 kubelet[3123]: E1101 00:34:13.621447 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed 
to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:34:18.622054 kubelet[3123]: E1101 00:34:18.622030 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:34:21.621631 kubelet[3123]: E1101 00:34:21.621584 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:34:23.620717 kubelet[3123]: E1101 00:34:23.620661 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:34:25.622845 kubelet[3123]: E1101 00:34:25.622727 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:34:25.622845 kubelet[3123]: E1101 00:34:25.622757 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:34:28.624552 kubelet[3123]: E1101 00:34:28.624472 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:34:29.622166 kubelet[3123]: E1101 00:34:29.622111 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:34:36.624304 containerd[1835]: time="2025-11-01T00:34:36.624203513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:34:36.990563 containerd[1835]: time="2025-11-01T00:34:36.990300333Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:34:36.991192 containerd[1835]: time="2025-11-01T00:34:36.991131227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:34:36.991258 containerd[1835]: time="2025-11-01T00:34:36.991201106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:34:36.991394 kubelet[3123]: E1101 00:34:36.991339 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:34:36.991394 kubelet[3123]: E1101 00:34:36.991370 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:34:36.991661 kubelet[3123]: E1101 00:34:36.991473 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75c65644df-6srvh_calico-system(055e53bc-992b-4781-aa59-63b9452c2f8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:34:36.991661 kubelet[3123]: E1101 00:34:36.991500 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:34:36.991734 containerd[1835]: time="2025-11-01T00:34:36.991600717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:34:37.335007 containerd[1835]: time="2025-11-01T00:34:37.334905827Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:34:37.335735 containerd[1835]: time="2025-11-01T00:34:37.335657966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:34:37.335769 containerd[1835]: time="2025-11-01T00:34:37.335729018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:34:37.335915 kubelet[3123]: E1101 00:34:37.335854 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:34:37.335915 kubelet[3123]: E1101 00:34:37.335888 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:34:37.335980 kubelet[3123]: E1101 00:34:37.335956 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-7nj5x_calico-apiserver(3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:34:37.336000 kubelet[3123]: E1101 00:34:37.335986 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:34:37.621565 kubelet[3123]: E1101 00:34:37.621476 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:34:38.624004 kubelet[3123]: E1101 00:34:38.623923 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:34:39.621859 kubelet[3123]: E1101 00:34:39.621816 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:34:40.623595 containerd[1835]: time="2025-11-01T00:34:40.623511518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:34:40.981183 containerd[1835]: time="2025-11-01T00:34:40.981049180Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:34:40.981646 containerd[1835]: time="2025-11-01T00:34:40.981626528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:34:40.981690 containerd[1835]: time="2025-11-01T00:34:40.981671508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:34:40.981816 kubelet[3123]: E1101 00:34:40.981793 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:34:40.982023 kubelet[3123]: E1101 00:34:40.981824 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:34:40.982023 kubelet[3123]: E1101 00:34:40.981877 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:34:40.982483 containerd[1835]: time="2025-11-01T00:34:40.982439829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:34:41.322704 containerd[1835]: time="2025-11-01T00:34:41.322626685Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:34:41.323396 containerd[1835]: time="2025-11-01T00:34:41.323372485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:34:41.323461 containerd[1835]: time="2025-11-01T00:34:41.323440382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:34:41.323549 kubelet[3123]: E1101 00:34:41.323526 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:34:41.323578 kubelet[3123]: E1101 00:34:41.323559 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:34:41.323617 kubelet[3123]: E1101 00:34:41.323609 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:34:41.323646 kubelet[3123]: E1101 00:34:41.323634 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:34:49.623384 containerd[1835]: time="2025-11-01T00:34:49.623234383Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:34:49.997403 containerd[1835]: time="2025-11-01T00:34:49.997206800Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:34:49.997978 containerd[1835]: time="2025-11-01T00:34:49.997891857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:34:49.997978 containerd[1835]: time="2025-11-01T00:34:49.997959657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:34:49.998152 kubelet[3123]: E1101 00:34:49.998076 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:34:49.998152 kubelet[3123]: E1101 00:34:49.998131 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:34:49.998442 kubelet[3123]: E1101 00:34:49.998185 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-8h7pk_calico-apiserver(68ef77d9-c28e-4552-8ad9-f26358f8691b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:34:49.998442 kubelet[3123]: E1101 00:34:49.998206 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:34:51.622891 kubelet[3123]: E1101 00:34:51.622784 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:34:51.624155 kubelet[3123]: E1101 00:34:51.623207 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" 
Nov 1 00:34:51.624506 containerd[1835]: time="2025-11-01T00:34:51.623738548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:34:51.959614 containerd[1835]: time="2025-11-01T00:34:51.959348619Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:34:51.960286 containerd[1835]: time="2025-11-01T00:34:51.960171081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:34:51.960286 containerd[1835]: time="2025-11-01T00:34:51.960242635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:34:51.960404 kubelet[3123]: E1101 00:34:51.960358 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:34:51.960404 kubelet[3123]: E1101 00:34:51.960386 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:34:51.960465 kubelet[3123]: E1101 00:34:51.960428 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:34:51.960926 containerd[1835]: time="2025-11-01T00:34:51.960873486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:34:52.299935 containerd[1835]: time="2025-11-01T00:34:52.299882944Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:34:52.300354 containerd[1835]: time="2025-11-01T00:34:52.300303361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:34:52.300354 containerd[1835]: time="2025-11-01T00:34:52.300338610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:34:52.300504 kubelet[3123]: E1101 00:34:52.300450 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:34:52.300504 kubelet[3123]: E1101 00:34:52.300479 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:34:52.300566 kubelet[3123]: E1101 00:34:52.300525 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:34:52.300566 kubelet[3123]: E1101 00:34:52.300554 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:34:52.624400 containerd[1835]: time="2025-11-01T00:34:52.624171458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:34:52.960610 containerd[1835]: time="2025-11-01T00:34:52.960364558Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:34:52.961200 containerd[1835]: time="2025-11-01T00:34:52.961148326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:34:52.961276 containerd[1835]: time="2025-11-01T00:34:52.961229290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:34:52.961363 kubelet[3123]: E1101 00:34:52.961344 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:34:52.961487 kubelet[3123]: E1101 00:34:52.961369 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:34:52.961487 kubelet[3123]: E1101 00:34:52.961410 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t7ml5_calico-system(0e047d2f-1491-42f0-a675-eff64087e5dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:34:52.961487 kubelet[3123]: E1101 00:34:52.961428 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:34:56.624294 kubelet[3123]: E1101 00:34:56.624053 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:35:02.623198 kubelet[3123]: E1101 00:35:02.623046 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:35:03.620862 kubelet[3123]: E1101 
00:35:03.620838 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:35:04.621725 kubelet[3123]: E1101 00:35:04.621652 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:35:06.623603 kubelet[3123]: E1101 00:35:06.623460 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:35:07.624565 kubelet[3123]: E1101 00:35:07.624437 3123 pod_workers.go:1324] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:35:10.624045 kubelet[3123]: E1101 00:35:10.623917 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" 
Nov 1 00:35:13.623535 kubelet[3123]: E1101 00:35:13.623387 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:35:14.627211 kubelet[3123]: E1101 00:35:14.627119 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:35:18.623524 kubelet[3123]: E1101 00:35:18.623460 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:35:18.624304 kubelet[3123]: E1101 
00:35:18.623520 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:35:22.623107 kubelet[3123]: E1101 00:35:22.623074 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:35:24.623203 kubelet[3123]: E1101 00:35:24.623084 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:35:25.622408 kubelet[3123]: E1101 00:35:25.622370 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:35:26.621159 kubelet[3123]: E1101 00:35:26.621127 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:35:31.622713 
kubelet[3123]: E1101 00:35:31.622637 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:35:33.621080 kubelet[3123]: E1101 00:35:33.621028 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:35:34.623287 kubelet[3123]: E1101 00:35:34.623208 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:35:35.621252 kubelet[3123]: E1101 00:35:35.621204 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:35:37.624186 kubelet[3123]: E1101 00:35:37.624047 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" 
podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:35:38.632558 kubelet[3123]: E1101 00:35:38.632449 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:35:45.623246 kubelet[3123]: E1101 00:35:45.623083 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:35:46.623579 kubelet[3123]: E1101 00:35:46.623463 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:35:47.623563 
kubelet[3123]: E1101 00:35:47.623479 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:35:47.627144 kubelet[3123]: E1101 00:35:47.625422 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:35:52.630348 kubelet[3123]: E1101 00:35:52.630214 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:35:53.623296 kubelet[3123]: E1101 00:35:53.623151 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:35:58.622817 kubelet[3123]: E1101 00:35:58.622749 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 
00:36:00.623224 kubelet[3123]: E1101 00:36:00.623122 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:36:00.623224 kubelet[3123]: E1101 00:36:00.623086 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:36:02.624492 kubelet[3123]: E1101 00:36:02.624367 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:36:06.631357 kubelet[3123]: E1101 00:36:06.631315 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:36:06.632253 kubelet[3123]: E1101 00:36:06.632234 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:36:10.620714 kubelet[3123]: E1101 00:36:10.620688 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:36:12.625153 kubelet[3123]: E1101 00:36:12.625102 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:36:14.633280 kubelet[3123]: E1101 00:36:14.633178 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" 
podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:36:16.624631 kubelet[3123]: E1101 00:36:16.624529 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:36:17.623993 kubelet[3123]: E1101 00:36:17.623901 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:36:19.622429 kubelet[3123]: E1101 00:36:19.622345 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:36:23.620946 kubelet[3123]: E1101 00:36:23.620888 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:36:23.620946 kubelet[3123]: E1101 00:36:23.620897 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:36:28.624902 kubelet[3123]: E1101 00:36:28.624822 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:36:28.626185 kubelet[3123]: E1101 00:36:28.626008 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:36:30.622991 kubelet[3123]: E1101 00:36:30.622896 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" 
for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:36:33.623446 kubelet[3123]: E1101 00:36:33.623312 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:36:36.623059 kubelet[3123]: E1101 00:36:36.622952 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:36:38.621761 kubelet[3123]: E1101 00:36:38.621720 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:36:39.621769 kubelet[3123]: E1101 00:36:39.621721 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:36:39.622043 kubelet[3123]: E1101 00:36:39.622020 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:36:42.624139 kubelet[3123]: E1101 00:36:42.624011 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:36:44.630368 kubelet[3123]: E1101 00:36:44.630256 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:36:51.623046 kubelet[3123]: E1101 00:36:51.622910 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:36:51.624322 kubelet[3123]: E1101 00:36:51.623218 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:36:51.624322 kubelet[3123]: E1101 00:36:51.623304 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:36:54.622299 kubelet[3123]: E1101 00:36:54.622244 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:36:54.622758 kubelet[3123]: E1101 00:36:54.622295 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:36:57.623745 kubelet[3123]: E1101 00:36:57.623655 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:37:04.623234 kubelet[3123]: E1101 00:37:04.623088 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:37:05.622790 kubelet[3123]: E1101 00:37:05.622698 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:37:05.623796 kubelet[3123]: E1101 00:37:05.623701 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:37:06.621342 kubelet[3123]: E1101 00:37:06.621294 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:37:08.623176 kubelet[3123]: E1101 
00:37:08.623109 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:37:11.620990 kubelet[3123]: E1101 00:37:11.620915 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:37:18.624448 kubelet[3123]: E1101 00:37:18.624382 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd" Nov 1 00:37:18.625184 containerd[1835]: time="2025-11-01T00:37:18.624820742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:37:18.984581 containerd[1835]: time="2025-11-01T00:37:18.984362947Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:37:18.985226 containerd[1835]: time="2025-11-01T00:37:18.985200615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:37:18.985295 containerd[1835]: time="2025-11-01T00:37:18.985261978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:37:18.985417 kubelet[3123]: E1101 00:37:18.985370 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:37:18.985417 kubelet[3123]: E1101 00:37:18.985399 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:37:18.985477 kubelet[3123]: E1101 00:37:18.985449 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75c65644df-6srvh_calico-system(055e53bc-992b-4781-aa59-63b9452c2f8e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:37:18.985525 kubelet[3123]: E1101 00:37:18.985471 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:37:19.621482 kubelet[3123]: E1101 00:37:19.621422 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb" Nov 1 00:37:20.623283 containerd[1835]: time="2025-11-01T00:37:20.623164910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:37:20.977636 containerd[1835]: time="2025-11-01T00:37:20.977556179Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:37:20.978027 containerd[1835]: time="2025-11-01T00:37:20.978005019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:37:20.978081 containerd[1835]: time="2025-11-01T00:37:20.978058361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:37:20.978183 kubelet[3123]: E1101 00:37:20.978160 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:37:20.978399 kubelet[3123]: E1101 00:37:20.978190 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:37:20.978399 kubelet[3123]: E1101 00:37:20.978268 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-7nj5x_calico-apiserver(3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:37:20.978399 kubelet[3123]: E1101 00:37:20.978295 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5" Nov 1 00:37:23.624333 kubelet[3123]: E1101 00:37:23.624192 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4" Nov 1 00:37:24.622470 kubelet[3123]: E1101 00:37:24.622393 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b" Nov 1 00:37:30.621214 kubelet[3123]: E1101 00:37:30.621160 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e" Nov 1 00:37:30.621489 containerd[1835]: time="2025-11-01T00:37:30.621280109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:37:30.978201 containerd[1835]: time="2025-11-01T00:37:30.977898510Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:37:30.979080 containerd[1835]: time="2025-11-01T00:37:30.979018409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:37:30.979134 containerd[1835]: time="2025-11-01T00:37:30.979086718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:37:30.979250 kubelet[3123]: E1101 00:37:30.979200 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:37:30.979250 kubelet[3123]: E1101 00:37:30.979227 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:37:30.979309 kubelet[3123]: E1101 00:37:30.979281 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:37:30.979690 containerd[1835]: time="2025-11-01T00:37:30.979678370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:37:31.358186 containerd[1835]: time="2025-11-01T00:37:31.358061417Z" level=info msg="trying next host - 
response was http.StatusNotFound" host=ghcr.io Nov 1 00:37:31.359006 containerd[1835]: time="2025-11-01T00:37:31.358978231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:37:31.359068 containerd[1835]: time="2025-11-01T00:37:31.359037991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:37:31.359214 kubelet[3123]: E1101 00:37:31.359190 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:37:31.359248 kubelet[3123]: E1101 00:37:31.359222 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:37:31.359324 kubelet[3123]: E1101 00:37:31.359277 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-544cf4559-wdzr2_calico-system(e2d29ca0-c92c-40d7-8210-622ae9e53eeb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:37:31.359355 kubelet[3123]: E1101 00:37:31.359313 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb"
Nov 1 00:37:32.623207 kubelet[3123]: E1101 00:37:32.623075 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd"
Nov 1 00:37:35.621565 containerd[1835]: time="2025-11-01T00:37:35.621543207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:37:35.988387 containerd[1835]: time="2025-11-01T00:37:35.988125845Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:37:35.989231 containerd[1835]: time="2025-11-01T00:37:35.989123272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:37:35.989231 containerd[1835]: time="2025-11-01T00:37:35.989183416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:37:35.989364 kubelet[3123]: E1101 00:37:35.989313 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:37:35.989364 kubelet[3123]: E1101 00:37:35.989340 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:37:35.989602 kubelet[3123]: E1101 00:37:35.989441 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57458876-8h7pk_calico-apiserver(68ef77d9-c28e-4552-8ad9-f26358f8691b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:37:35.989602 kubelet[3123]: E1101 00:37:35.989485 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b"
Nov 1 00:37:35.989663 containerd[1835]: time="2025-11-01T00:37:35.989564282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 1 00:37:36.345298 containerd[1835]: time="2025-11-01T00:37:36.345165246Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:37:36.346254 containerd[1835]: time="2025-11-01T00:37:36.346189473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 1 00:37:36.346306 containerd[1835]: time="2025-11-01T00:37:36.346256891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 1 00:37:36.346414 kubelet[3123]: E1101 00:37:36.346358 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 00:37:36.346414 kubelet[3123]: E1101 00:37:36.346391 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 00:37:36.346479 kubelet[3123]: E1101 00:37:36.346435 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:37:36.346866 containerd[1835]: time="2025-11-01T00:37:36.346822570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 1 00:37:36.621241 kubelet[3123]: E1101 00:37:36.621158 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5"
Nov 1 00:37:36.697381 containerd[1835]: time="2025-11-01T00:37:36.697301482Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:37:36.698224 containerd[1835]: time="2025-11-01T00:37:36.698199447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 1 00:37:36.698290 containerd[1835]: time="2025-11-01T00:37:36.698269017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 1 00:37:36.698356 kubelet[3123]: E1101 00:37:36.698339 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:37:36.698384 kubelet[3123]: E1101 00:37:36.698363 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:37:36.698416 kubelet[3123]: E1101 00:37:36.698407 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-vvbjm_calico-system(0245246d-bdc5-450d-b21c-5eff759295d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:37:36.698455 kubelet[3123]: E1101 00:37:36.698432 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4"
Nov 1 00:37:42.623284 kubelet[3123]: E1101 00:37:42.623178 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e"
Nov 1 00:37:46.623006 kubelet[3123]: E1101 00:37:46.622928 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb"
Nov 1 00:37:47.623026 containerd[1835]: time="2025-11-01T00:37:47.622959506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 1 00:37:47.971622 containerd[1835]: time="2025-11-01T00:37:47.971352990Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:37:47.972403 containerd[1835]: time="2025-11-01T00:37:47.972345285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 1 00:37:47.972469 containerd[1835]: time="2025-11-01T00:37:47.972389563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:37:47.972515 kubelet[3123]: E1101 00:37:47.972488 3123 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 00:37:47.972717 kubelet[3123]: E1101 00:37:47.972523 3123 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 00:37:47.972717 kubelet[3123]: E1101 00:37:47.972577 3123 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t7ml5_calico-system(0e047d2f-1491-42f0-a675-eff64087e5dd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:37:47.972717 kubelet[3123]: E1101 00:37:47.972602 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd"
Nov 1 00:37:48.623198 kubelet[3123]: E1101 00:37:48.623127 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b"
Nov 1 00:37:48.623198 kubelet[3123]: E1101 00:37:48.623157 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5"
Nov 1 00:37:50.623486 kubelet[3123]: E1101 00:37:50.623381 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4"
Nov 1 00:37:52.332619 systemd[1]: Started sshd@11-139.178.94.145:22-139.178.89.65:48546.service - OpenSSH per-connection server daemon (139.178.89.65:48546).
Nov 1 00:37:52.379864 sshd[7458]: Accepted publickey for core from 139.178.89.65 port 48546 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:37:52.381002 sshd[7458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:37:52.385039 systemd-logind[1818]: New session 14 of user core.
Nov 1 00:37:52.407315 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 1 00:37:52.552029 sshd[7458]: pam_unix(sshd:session): session closed for user core
Nov 1 00:37:52.554009 systemd[1]: sshd@11-139.178.94.145:22-139.178.89.65:48546.service: Deactivated successfully.
Nov 1 00:37:52.555159 systemd[1]: session-14.scope: Deactivated successfully.
Nov 1 00:37:52.556000 systemd-logind[1818]: Session 14 logged out. Waiting for processes to exit.
Nov 1 00:37:52.556758 systemd-logind[1818]: Removed session 14.
Nov 1 00:37:57.575695 systemd[1]: Started sshd@12-139.178.94.145:22-139.178.89.65:45968.service - OpenSSH per-connection server daemon (139.178.89.65:45968).
Nov 1 00:37:57.607206 sshd[7523]: Accepted publickey for core from 139.178.89.65 port 45968 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:37:57.608013 sshd[7523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:37:57.610567 systemd-logind[1818]: New session 15 of user core.
Nov 1 00:37:57.621503 kubelet[3123]: E1101 00:37:57.621483 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e"
Nov 1 00:37:57.626409 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 1 00:37:57.729623 sshd[7523]: pam_unix(sshd:session): session closed for user core
Nov 1 00:37:57.735669 systemd[1]: sshd@12-139.178.94.145:22-139.178.89.65:45968.service: Deactivated successfully.
Nov 1 00:37:57.736530 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 00:37:57.736853 systemd-logind[1818]: Session 15 logged out. Waiting for processes to exit.
Nov 1 00:37:57.737331 systemd-logind[1818]: Removed session 15.
Nov 1 00:37:58.622716 kubelet[3123]: E1101 00:37:58.622657 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb"
Nov 1 00:37:59.621125 kubelet[3123]: E1101 00:37:59.621100 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5"
Nov 1 00:37:59.621125 kubelet[3123]: E1101 00:37:59.621100 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd"
Nov 1 00:38:01.622422 kubelet[3123]: E1101 00:38:01.622367 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b"
Nov 1 00:38:02.744175 systemd[1]: Started sshd@13-139.178.94.145:22-139.178.89.65:45972.service - OpenSSH per-connection server daemon (139.178.89.65:45972).
Nov 1 00:38:02.775907 sshd[7558]: Accepted publickey for core from 139.178.89.65 port 45972 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:02.776700 sshd[7558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:02.779392 systemd-logind[1818]: New session 16 of user core.
Nov 1 00:38:02.788265 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 1 00:38:02.880228 sshd[7558]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:02.897075 systemd[1]: sshd@13-139.178.94.145:22-139.178.89.65:45972.service: Deactivated successfully.
Nov 1 00:38:02.898007 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 00:38:02.898675 systemd-logind[1818]: Session 16 logged out. Waiting for processes to exit.
Nov 1 00:38:02.899455 systemd[1]: Started sshd@14-139.178.94.145:22-139.178.89.65:45976.service - OpenSSH per-connection server daemon (139.178.89.65:45976).
Nov 1 00:38:02.899940 systemd-logind[1818]: Removed session 16.
Nov 1 00:38:02.930655 sshd[7585]: Accepted publickey for core from 139.178.89.65 port 45976 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:02.931439 sshd[7585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:02.933946 systemd-logind[1818]: New session 17 of user core.
Nov 1 00:38:02.945235 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 1 00:38:03.084990 sshd[7585]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:03.094791 systemd[1]: sshd@14-139.178.94.145:22-139.178.89.65:45976.service: Deactivated successfully.
Nov 1 00:38:03.095679 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 00:38:03.096360 systemd-logind[1818]: Session 17 logged out. Waiting for processes to exit.
Nov 1 00:38:03.097087 systemd[1]: Started sshd@15-139.178.94.145:22-139.178.89.65:45990.service - OpenSSH per-connection server daemon (139.178.89.65:45990).
Nov 1 00:38:03.097508 systemd-logind[1818]: Removed session 17.
Nov 1 00:38:03.129049 sshd[7611]: Accepted publickey for core from 139.178.89.65 port 45990 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:03.129802 sshd[7611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:03.132297 systemd-logind[1818]: New session 18 of user core.
Nov 1 00:38:03.141520 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 1 00:38:03.275925 sshd[7611]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:03.277535 systemd[1]: sshd@15-139.178.94.145:22-139.178.89.65:45990.service: Deactivated successfully.
Nov 1 00:38:03.278427 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 00:38:03.279038 systemd-logind[1818]: Session 18 logged out. Waiting for processes to exit.
Nov 1 00:38:03.279496 systemd-logind[1818]: Removed session 18.
Nov 1 00:38:04.622060 kubelet[3123]: E1101 00:38:04.622017 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4"
Nov 1 00:38:08.305815 systemd[1]: Started sshd@16-139.178.94.145:22-139.178.89.65:53212.service - OpenSSH per-connection server daemon (139.178.89.65:53212).
Nov 1 00:38:08.371724 sshd[7641]: Accepted publickey for core from 139.178.89.65 port 53212 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:08.372608 sshd[7641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:08.375826 systemd-logind[1818]: New session 19 of user core.
Nov 1 00:38:08.391261 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 1 00:38:08.547747 sshd[7641]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:08.549821 systemd[1]: sshd@16-139.178.94.145:22-139.178.89.65:53212.service: Deactivated successfully.
Nov 1 00:38:08.550992 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:38:08.551855 systemd-logind[1818]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:38:08.552732 systemd-logind[1818]: Removed session 19.
Nov 1 00:38:09.624220 kubelet[3123]: E1101 00:38:09.624055 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb"
Nov 1 00:38:10.621547 kubelet[3123]: E1101 00:38:10.621480 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e"
Nov 1 00:38:12.622848 kubelet[3123]: E1101 00:38:12.622742 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd"
Nov 1 00:38:13.589472 systemd[1]: Started sshd@17-139.178.94.145:22-139.178.89.65:53218.service - OpenSSH per-connection server daemon (139.178.89.65:53218).
Nov 1 00:38:13.645709 sshd[7670]: Accepted publickey for core from 139.178.89.65 port 53218 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:13.647089 sshd[7670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:13.651395 systemd-logind[1818]: New session 20 of user core.
Nov 1 00:38:13.668318 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 1 00:38:13.776063 sshd[7670]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:13.777594 systemd[1]: sshd@17-139.178.94.145:22-139.178.89.65:53218.service: Deactivated successfully.
Nov 1 00:38:13.778520 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:38:13.779138 systemd-logind[1818]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:38:13.779782 systemd-logind[1818]: Removed session 20.
Nov 1 00:38:14.625323 kubelet[3123]: E1101 00:38:14.625269 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5"
Nov 1 00:38:15.620738 kubelet[3123]: E1101 00:38:15.620710 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b"
Nov 1 00:38:16.624877 kubelet[3123]: E1101 00:38:16.624747 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4"
Nov 1 00:38:18.789156 systemd[1]: Started sshd@18-139.178.94.145:22-139.178.89.65:58012.service - OpenSSH per-connection server daemon (139.178.89.65:58012).
Nov 1 00:38:18.830014 sshd[7699]: Accepted publickey for core from 139.178.89.65 port 58012 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:18.830876 sshd[7699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:18.833867 systemd-logind[1818]: New session 21 of user core.
Nov 1 00:38:18.843252 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 1 00:38:18.921131 sshd[7699]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:18.922762 systemd[1]: sshd@18-139.178.94.145:22-139.178.89.65:58012.service: Deactivated successfully.
Nov 1 00:38:18.923757 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 00:38:18.924407 systemd-logind[1818]: Session 21 logged out. Waiting for processes to exit.
Nov 1 00:38:18.924874 systemd-logind[1818]: Removed session 21.
Nov 1 00:38:23.936676 systemd[1]: Started sshd@19-139.178.94.145:22-139.178.89.65:58028.service - OpenSSH per-connection server daemon (139.178.89.65:58028).
Nov 1 00:38:23.977976 sshd[7725]: Accepted publickey for core from 139.178.89.65 port 58028 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:23.978745 sshd[7725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:23.981059 systemd-logind[1818]: New session 22 of user core.
Nov 1 00:38:23.992263 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 1 00:38:24.100280 sshd[7725]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:24.117746 systemd[1]: sshd@19-139.178.94.145:22-139.178.89.65:58028.service: Deactivated successfully.
Nov 1 00:38:24.118514 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 00:38:24.119116 systemd-logind[1818]: Session 22 logged out. Waiting for processes to exit.
Nov 1 00:38:24.119793 systemd[1]: Started sshd@20-139.178.94.145:22-139.178.89.65:58030.service - OpenSSH per-connection server daemon (139.178.89.65:58030).
Nov 1 00:38:24.120235 systemd-logind[1818]: Removed session 22.
Nov 1 00:38:24.151474 sshd[7751]: Accepted publickey for core from 139.178.89.65 port 58030 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:24.152331 sshd[7751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:24.154935 systemd-logind[1818]: New session 23 of user core.
Nov 1 00:38:24.166415 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 1 00:38:24.314411 sshd[7751]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:24.329565 systemd[1]: sshd@20-139.178.94.145:22-139.178.89.65:58030.service: Deactivated successfully.
Nov 1 00:38:24.330297 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 00:38:24.330931 systemd-logind[1818]: Session 23 logged out. Waiting for processes to exit.
Nov 1 00:38:24.331608 systemd[1]: Started sshd@21-139.178.94.145:22-139.178.89.65:58046.service - OpenSSH per-connection server daemon (139.178.89.65:58046).
Nov 1 00:38:24.332025 systemd-logind[1818]: Removed session 23.
Nov 1 00:38:24.363590 sshd[7807]: Accepted publickey for core from 139.178.89.65 port 58046 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:24.364405 sshd[7807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:24.367355 systemd-logind[1818]: New session 24 of user core.
Nov 1 00:38:24.387374 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 1 00:38:24.623910 kubelet[3123]: E1101 00:38:24.623637 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e"
Nov 1 00:38:24.625220 kubelet[3123]: E1101 00:38:24.624889 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-544cf4559-wdzr2" podUID="e2d29ca0-c92c-40d7-8210-622ae9e53eeb"
Nov 1 00:38:25.059462 sshd[7807]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:25.070842 systemd[1]: sshd@21-139.178.94.145:22-139.178.89.65:58046.service: Deactivated successfully.
Nov 1 00:38:25.071706 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 00:38:25.072419 systemd-logind[1818]: Session 24 logged out. Waiting for processes to exit.
Nov 1 00:38:25.073064 systemd[1]: Started sshd@22-139.178.94.145:22-139.178.89.65:58048.service - OpenSSH per-connection server daemon (139.178.89.65:58048).
Nov 1 00:38:25.073465 systemd-logind[1818]: Removed session 24.
Nov 1 00:38:25.104476 sshd[7838]: Accepted publickey for core from 139.178.89.65 port 58048 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:25.105262 sshd[7838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:25.107881 systemd-logind[1818]: New session 25 of user core.
Nov 1 00:38:25.129303 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 1 00:38:25.273080 sshd[7838]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:25.284733 systemd[1]: sshd@22-139.178.94.145:22-139.178.89.65:58048.service: Deactivated successfully.
Nov 1 00:38:25.285570 systemd[1]: session-25.scope: Deactivated successfully.
Nov 1 00:38:25.286275 systemd-logind[1818]: Session 25 logged out. Waiting for processes to exit.
Nov 1 00:38:25.286930 systemd[1]: Started sshd@23-139.178.94.145:22-139.178.89.65:58064.service - OpenSSH per-connection server daemon (139.178.89.65:58064).
Nov 1 00:38:25.287509 systemd-logind[1818]: Removed session 25.
Nov 1 00:38:25.319033 sshd[7863]: Accepted publickey for core from 139.178.89.65 port 58064 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:25.319764 sshd[7863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:25.322403 systemd-logind[1818]: New session 26 of user core.
Nov 1 00:38:25.332574 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 1 00:38:25.459600 sshd[7863]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:25.460988 systemd[1]: sshd@23-139.178.94.145:22-139.178.89.65:58064.service: Deactivated successfully.
Nov 1 00:38:25.461899 systemd[1]: session-26.scope: Deactivated successfully.
Nov 1 00:38:25.462626 systemd-logind[1818]: Session 26 logged out. Waiting for processes to exit.
Nov 1 00:38:25.463116 systemd-logind[1818]: Removed session 26.
Nov 1 00:38:25.623699 kubelet[3123]: E1101 00:38:25.623431 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t7ml5" podUID="0e047d2f-1491-42f0-a675-eff64087e5dd"
Nov 1 00:38:27.626726 kubelet[3123]: E1101 00:38:27.626642 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vvbjm" podUID="0245246d-bdc5-450d-b21c-5eff759295d4"
Nov 1 00:38:28.624629 kubelet[3123]: E1101 00:38:28.624542 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-8h7pk" podUID="68ef77d9-c28e-4552-8ad9-f26358f8691b"
Nov 1 00:38:28.624629 kubelet[3123]: E1101 00:38:28.624570 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57458876-7nj5x" podUID="3be7fc93-dc6d-492b-bf0b-0eb6ed63fef5"
Nov 1 00:38:30.479838 systemd[1]: Started sshd@24-139.178.94.145:22-139.178.89.65:39838.service - OpenSSH per-connection server daemon (139.178.89.65:39838).
Nov 1 00:38:30.545911 sshd[7894]: Accepted publickey for core from 139.178.89.65 port 39838 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:30.549949 sshd[7894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:30.561929 systemd-logind[1818]: New session 27 of user core.
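[Editor's note: the kubelet entries above repeat one underlying failure: containerd reports NotFound for a set of ghcr.io/flatcar/calico images at tag v3.30.4, so the pods sit in ImagePullBackOff. A minimal sketch of extracting the failing image references from a saved journal excerpt; the sample line is copied from the log above, and everything else (the regex, the idea of then re-checking each tag against the registry) is an assumption, not something the log shows:]

```shell
# Pull the unresolved image references out of a kubelet ImagePullBackOff line.
# The sample line is taken verbatim from the journal above.
line='Back-off pulling image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"'
printf '%s\n' "$line" | grep -oE 'ghcr\.io/[a-z/-]+:v[0-9.]+' | sort -u
# Each extracted tag could then be tested on the node, e.g. with
# `crictl pull <ref>` or `ctr image pull <ref>`, to confirm the NotFound.
```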
Nov 1 00:38:30.578358 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 1 00:38:30.695620 sshd[7894]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:30.699322 systemd[1]: sshd@24-139.178.94.145:22-139.178.89.65:39838.service: Deactivated successfully.
Nov 1 00:38:30.700976 systemd[1]: session-27.scope: Deactivated successfully.
Nov 1 00:38:30.701779 systemd-logind[1818]: Session 27 logged out. Waiting for processes to exit.
Nov 1 00:38:30.702752 systemd-logind[1818]: Removed session 27.
Nov 1 00:38:35.622981 kubelet[3123]: E1101 00:38:35.622893 3123 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75c65644df-6srvh" podUID="055e53bc-992b-4781-aa59-63b9452c2f8e"
Nov 1 00:38:35.737597 systemd[1]: Started sshd@25-139.178.94.145:22-139.178.89.65:39850.service - OpenSSH per-connection server daemon (139.178.89.65:39850).
Nov 1 00:38:35.784039 sshd[7922]: Accepted publickey for core from 139.178.89.65 port 39850 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 00:38:35.785169 sshd[7922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:38:35.788898 systemd-logind[1818]: New session 28 of user core.
Nov 1 00:38:35.807370 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 1 00:38:35.931500 update_engine[1823]: I20251101 00:38:35.931375 1823 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Nov 1 00:38:35.931500 update_engine[1823]: I20251101 00:38:35.931411 1823 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Nov 1 00:38:35.931905 update_engine[1823]: I20251101 00:38:35.931550 1823 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Nov 1 00:38:35.931905 update_engine[1823]: I20251101 00:38:35.931885 1823 omaha_request_params.cc:62] Current group set to lts
Nov 1 00:38:35.931991 update_engine[1823]: I20251101 00:38:35.931977 1823 update_attempter.cc:499] Already updated boot flags. Skipping.
Nov 1 00:38:35.931991 update_engine[1823]: I20251101 00:38:35.931988 1823 update_attempter.cc:643] Scheduling an action processor start.
Nov 1 00:38:35.932050 update_engine[1823]: I20251101 00:38:35.931999 1823 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 1 00:38:35.932050 update_engine[1823]: I20251101 00:38:35.932026 1823 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Nov 1 00:38:35.932113 update_engine[1823]: I20251101 00:38:35.932076 1823 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 1 00:38:35.932113 update_engine[1823]: I20251101 00:38:35.932086 1823 omaha_request_action.cc:272] Request:
Nov 1 00:38:35.932113 update_engine[1823]: [eight continuation lines follow in the capture with empty payloads; the Omaha request body, which is XML, appears to have been stripped during log extraction]
Nov 1 00:38:35.932113 update_engine[1823]: I20251101 00:38:35.932099 1823 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 1 00:38:35.932379 locksmithd[1871]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Nov 1 00:38:35.933228 update_engine[1823]: I20251101 00:38:35.933182 1823 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 1 00:38:35.933435 update_engine[1823]: I20251101 00:38:35.933388 1823 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 1 00:38:35.933851 update_engine[1823]: E20251101 00:38:35.933800 1823 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 1 00:38:35.933851 update_engine[1823]: I20251101 00:38:35.933847 1823 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Nov 1 00:38:35.947435 sshd[7922]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:35.949697 systemd[1]: sshd@25-139.178.94.145:22-139.178.89.65:39850.service: Deactivated successfully.
Nov 1 00:38:35.950988 systemd[1]: session-28.scope: Deactivated successfully.
Nov 1 00:38:35.951538 systemd-logind[1818]: Session 28 logged out. Waiting for processes to exit.
Nov 1 00:38:35.952070 systemd-logind[1818]: Removed session 28.
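[Editor's note: "Posting an Omaha request to disabled" followed by "Could not resolve host: disabled" indicates the update server URL is literally the string "disabled". On Flatcar this is the conventional way to switch off automatic updates (SERVER=disabled in the update configuration); the file path /etc/flatcar/update.conf and the exact mechanism are assumptions here, not shown in the log, though the log's "Current group set to lts" matches a GROUP=lts entry in the same file. A sketch of how update_engine would end up with that value:]

```shell
# Reproduce a Flatcar-style update.conf and show how a literal SERVER=disabled
# becomes the "host" that curl then fails to resolve. File name is illustrative.
cat > update.conf <<'EOF'
GROUP=lts
SERVER=disabled
EOF
. ./update.conf                      # key=value pairs become shell variables
echo "Posting an Omaha request to ${SERVER}"
```

Since "disabled" is not a resolvable hostname, the fetch fails by design and locksmithd never sees an update, which matches the retry loop in the log.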