Nov 8 00:31:06.035526 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:31:06.035563 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:31:06.035577 kernel: BIOS-provided physical RAM map:
Nov 8 00:31:06.035585 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 8 00:31:06.035592 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 8 00:31:06.035600 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 8 00:31:06.035608 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 8 00:31:06.035616 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 8 00:31:06.035624 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Nov 8 00:31:06.035631 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Nov 8 00:31:06.035642 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Nov 8 00:31:06.035650 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Nov 8 00:31:06.035661 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Nov 8 00:31:06.035669 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Nov 8 00:31:06.035681 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Nov 8 00:31:06.035689 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 8 00:31:06.035701 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Nov 8 00:31:06.035709 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Nov 8 00:31:06.035717 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 8 00:31:06.035725 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:31:06.035733 kernel: NX (Execute Disable) protection: active
Nov 8 00:31:06.035741 kernel: APIC: Static calls initialized
Nov 8 00:31:06.035749 kernel: efi: EFI v2.7 by EDK II
Nov 8 00:31:06.035757 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Nov 8 00:31:06.035765 kernel: SMBIOS 2.8 present.
Nov 8 00:31:06.035781 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Nov 8 00:31:06.035792 kernel: Hypervisor detected: KVM
Nov 8 00:31:06.035859 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:31:06.035868 kernel: kvm-clock: using sched offset of 5125309990 cycles
Nov 8 00:31:06.035877 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:31:06.035885 kernel: tsc: Detected 2794.750 MHz processor
Nov 8 00:31:06.035894 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:31:06.035903 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:31:06.035911 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Nov 8 00:31:06.035920 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 8 00:31:06.035928 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:31:06.035940 kernel: Using GB pages for direct mapping
Nov 8 00:31:06.035948 kernel: Secure boot disabled
Nov 8 00:31:06.035956 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:31:06.035965 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 8 00:31:06.035978 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 8 00:31:06.035987 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:31:06.035996 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:31:06.036008 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 8 00:31:06.036016 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:31:06.036028 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:31:06.036037 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:31:06.036051 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:31:06.036060 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 8 00:31:06.036069 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 8 00:31:06.036082 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 8 00:31:06.036091 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 8 00:31:06.036100 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 8 00:31:06.036108 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 8 00:31:06.036117 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 8 00:31:06.036126 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 8 00:31:06.036135 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 8 00:31:06.036146 kernel: No NUMA configuration found
Nov 8 00:31:06.036157 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Nov 8 00:31:06.036169 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Nov 8 00:31:06.036178 kernel: Zone ranges:
Nov 8 00:31:06.036187 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:31:06.036196 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Nov 8 00:31:06.036204 kernel: Normal empty
Nov 8 00:31:06.036228 kernel: Movable zone start for each node
Nov 8 00:31:06.036244 kernel: Early memory node ranges
Nov 8 00:31:06.036255 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 8 00:31:06.036264 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 8 00:31:06.036273 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 8 00:31:06.036294 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Nov 8 00:31:06.036303 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Nov 8 00:31:06.036312 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Nov 8 00:31:06.036330 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Nov 8 00:31:06.036350 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:31:06.036365 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 8 00:31:06.036374 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 8 00:31:06.036391 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:31:06.036401 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Nov 8 00:31:06.036422 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 8 00:31:06.036431 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Nov 8 00:31:06.036440 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:31:06.036455 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:31:06.036465 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:31:06.036474 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:31:06.036483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:31:06.036492 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:31:06.036501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:31:06.036513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:31:06.036522 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:31:06.036531 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:31:06.036540 kernel: TSC deadline timer available
Nov 8 00:31:06.036548 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 8 00:31:06.036565 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:31:06.036574 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 8 00:31:06.036582 kernel: kvm-guest: setup PV sched yield
Nov 8 00:31:06.036591 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 8 00:31:06.036600 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:31:06.036612 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:31:06.036621 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 8 00:31:06.036630 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Nov 8 00:31:06.036639 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Nov 8 00:31:06.036648 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 8 00:31:06.036657 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:31:06.036665 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:31:06.036676 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:31:06.036690 kernel: random: crng init done
Nov 8 00:31:06.036699 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:31:06.036708 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:31:06.036717 kernel: Fallback order for Node 0: 0
Nov 8 00:31:06.036726 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Nov 8 00:31:06.036735 kernel: Policy zone: DMA32
Nov 8 00:31:06.036743 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:31:06.036753 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 166140K reserved, 0K cma-reserved)
Nov 8 00:31:06.036771 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 8 00:31:06.036785 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:31:06.036793 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:31:06.036820 kernel: Dynamic Preempt: voluntary
Nov 8 00:31:06.036829 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:31:06.036849 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:31:06.036871 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 8 00:31:06.036889 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:31:06.036899 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:31:06.036908 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:31:06.036918 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:31:06.036927 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 8 00:31:06.036936 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 8 00:31:06.036949 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:31:06.036958 kernel: Console: colour dummy device 80x25
Nov 8 00:31:06.036968 kernel: printk: console [ttyS0] enabled
Nov 8 00:31:06.036979 kernel: ACPI: Core revision 20230628
Nov 8 00:31:06.036989 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:31:06.037003 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:31:06.037013 kernel: x2apic enabled
Nov 8 00:31:06.037022 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:31:06.037031 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 8 00:31:06.037041 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 8 00:31:06.037054 kernel: kvm-guest: setup PV IPIs
Nov 8 00:31:06.037074 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:31:06.037084 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:31:06.037093 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Nov 8 00:31:06.037109 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:31:06.037118 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:31:06.037127 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:31:06.037136 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:31:06.037146 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:31:06.037155 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:31:06.037164 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 8 00:31:06.037173 kernel: active return thunk: retbleed_return_thunk
Nov 8 00:31:06.037182 kernel: RETBleed: Mitigation: untrained return thunk
Nov 8 00:31:06.037197 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:31:06.037206 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:31:06.037216 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 8 00:31:06.037236 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 8 00:31:06.037246 kernel: active return thunk: srso_return_thunk
Nov 8 00:31:06.037255 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 8 00:31:06.037265 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:31:06.037274 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:31:06.037287 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:31:06.037296 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:31:06.037310 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 8 00:31:06.037319 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:31:06.037329 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:31:06.037338 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:31:06.037347 kernel: landlock: Up and running.
Nov 8 00:31:06.037356 kernel: SELinux: Initializing.
Nov 8 00:31:06.037365 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:31:06.037378 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:31:06.037387 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 8 00:31:06.037512 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:31:06.037521 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:31:06.037530 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 8 00:31:06.037540 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:31:06.037549 kernel: ... version: 0
Nov 8 00:31:06.037566 kernel: ... bit width: 48
Nov 8 00:31:06.037575 kernel: ... generic registers: 6
Nov 8 00:31:06.037588 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:31:06.037597 kernel: ... max period: 00007fffffffffff
Nov 8 00:31:06.037606 kernel: ... fixed-purpose events: 0
Nov 8 00:31:06.037615 kernel: ... event mask: 000000000000003f
Nov 8 00:31:06.037624 kernel: signal: max sigframe size: 1776
Nov 8 00:31:06.037633 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:31:06.037643 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:31:06.037653 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:31:06.037662 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:31:06.037674 kernel: .... node #0, CPUs: #1 #2 #3
Nov 8 00:31:06.037683 kernel: smp: Brought up 1 node, 4 CPUs
Nov 8 00:31:06.037692 kernel: smpboot: Max logical packages: 1
Nov 8 00:31:06.037701 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Nov 8 00:31:06.037711 kernel: devtmpfs: initialized
Nov 8 00:31:06.037720 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:31:06.037729 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 8 00:31:06.037738 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 8 00:31:06.037748 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Nov 8 00:31:06.037760 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 8 00:31:06.037769 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 8 00:31:06.037778 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:31:06.037787 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 8 00:31:06.037797 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:31:06.037829 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:31:06.037838 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:31:06.037847 kernel: audit: type=2000 audit(1762561864.805:1): state=initialized audit_enabled=0 res=1
Nov 8 00:31:06.037856 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:31:06.037869 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:31:06.037878 kernel: cpuidle: using governor menu
Nov 8 00:31:06.037888 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:31:06.037897 kernel: dca service started, version 1.12.1
Nov 8 00:31:06.037906 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:31:06.037915 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 00:31:06.037925 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:31:06.037934 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:31:06.037943 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:31:06.037955 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:31:06.037964 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:31:06.037973 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:31:06.037983 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:31:06.037992 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:31:06.038001 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:31:06.038010 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:31:06.038019 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:31:06.038028 kernel: ACPI: Interpreter enabled
Nov 8 00:31:06.038040 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 8 00:31:06.038050 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:31:06.038059 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:31:06.038068 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:31:06.038077 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:31:06.038086 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:31:06.038389 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:31:06.038544 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:31:06.038702 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:31:06.038715 kernel: PCI host bridge to bus 0000:00
Nov 8 00:31:06.038940 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:31:06.039076 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:31:06.039206 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:31:06.039340 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 8 00:31:06.039578 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:31:06.039715 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Nov 8 00:31:06.039862 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:31:06.040034 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:31:06.040194 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 8 00:31:06.040341 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Nov 8 00:31:06.040503 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Nov 8 00:31:06.040702 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 8 00:31:06.040866 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Nov 8 00:31:06.041013 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:31:06.041414 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 8 00:31:06.041577 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Nov 8 00:31:06.041731 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Nov 8 00:31:06.041892 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Nov 8 00:31:06.042059 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:31:06.042204 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Nov 8 00:31:06.042346 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Nov 8 00:31:06.042490 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Nov 8 00:31:06.042689 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:31:06.042855 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Nov 8 00:31:06.043015 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Nov 8 00:31:06.043161 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Nov 8 00:31:06.043313 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Nov 8 00:31:06.043478 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:31:06.043651 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:31:06.043853 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:31:06.044019 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Nov 8 00:31:06.044169 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Nov 8 00:31:06.044372 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:31:06.044536 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Nov 8 00:31:06.044560 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:31:06.044570 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:31:06.044580 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:31:06.044589 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:31:06.044613 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:31:06.044631 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:31:06.044641 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:31:06.044651 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:31:06.044663 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:31:06.044703 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:31:06.044715 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:31:06.044724 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:31:06.044733 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:31:06.044743 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:31:06.044759 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:31:06.044768 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:31:06.044777 kernel: iommu: Default domain type: Translated
Nov 8 00:31:06.044786 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:31:06.044863 kernel: efivars: Registered efivars operations
Nov 8 00:31:06.044874 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:31:06.044892 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:31:06.044901 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 8 00:31:06.044911 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Nov 8 00:31:06.044925 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Nov 8 00:31:06.044937 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Nov 8 00:31:06.045124 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:31:06.045301 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:31:06.045464 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:31:06.045477 kernel: vgaarb: loaded
Nov 8 00:31:06.045486 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:31:06.045496 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:31:06.045506 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:31:06.045520 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:31:06.045529 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:31:06.045539 kernel: pnp: PnP ACPI init
Nov 8 00:31:06.046057 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:31:06.046074 kernel: pnp: PnP ACPI: found 6 devices
Nov 8 00:31:06.046084 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:31:06.046094 kernel: NET: Registered PF_INET protocol family
Nov 8 00:31:06.046103 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:31:06.046122 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:31:06.046131 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:31:06.046143 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:31:06.046153 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:31:06.046162 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:31:06.046172 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:31:06.046181 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:31:06.046190 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:31:06.046200 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:31:06.046354 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Nov 8 00:31:06.046497 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Nov 8 00:31:06.046644 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:31:06.046781 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:31:06.046970 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:31:06.047102 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 8 00:31:06.047244 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:31:06.047408 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Nov 8 00:31:06.047428 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:31:06.047438 kernel: Initialise system trusted keyrings
Nov 8 00:31:06.047448 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:31:06.047457 kernel: Key type asymmetric registered
Nov 8 00:31:06.047467 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:31:06.047476 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:31:06.047486 kernel: io scheduler mq-deadline registered
Nov 8 00:31:06.047496 kernel: io scheduler kyber registered
Nov 8 00:31:06.047505 kernel: io scheduler bfq registered
Nov 8 00:31:06.047517 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:31:06.047527 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:31:06.047537 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:31:06.047547 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 8 00:31:06.047564 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:31:06.047573 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:31:06.047583 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:31:06.047592 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:31:06.047602 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:31:06.047614 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:31:06.047833 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 8 00:31:06.047992 kernel: rtc_cmos 00:04: registered as rtc0
Nov 8 00:31:06.048139 kernel: rtc_cmos 00:04: setting system clock to 2025-11-08T00:31:05 UTC (1762561865)
Nov 8 00:31:06.048275 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:31:06.048288 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:31:06.048304 kernel: efifb: probing for efifb
Nov 8 00:31:06.048323 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Nov 8 00:31:06.048334 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Nov 8 00:31:06.048344 kernel: efifb: scrolling: redraw
Nov 8 00:31:06.048354 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Nov 8 00:31:06.048363 kernel: Console: switching to colour frame buffer device 100x37
Nov 8 00:31:06.048373 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:31:06.048420 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:31:06.048435 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 8 00:31:06.048445 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:31:06.048458 kernel: Segment Routing with IPv6
Nov 8 00:31:06.048467 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:31:06.048477 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:31:06.048489 kernel: Key type dns_resolver registered
Nov 8 00:31:06.048499 kernel: IPI shorthand broadcast: enabled
Nov 8 00:31:06.048509 kernel: sched_clock: Marking stable (1211003932, 207354009)->(1598172743, -179814802)
Nov 8 00:31:06.048519 kernel: registered taskstats version 1
Nov 8 00:31:06.048528 kernel: Loading compiled-in X.509 certificates
Nov 8 00:31:06.048539 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:31:06.048548 kernel: Key type .fscrypt registered
Nov 8 00:31:06.048569 kernel: Key type fscrypt-provisioning registered
Nov 8 00:31:06.048579 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:31:06.048588 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:31:06.048598 kernel: ima: No architecture policies found
Nov 8 00:31:06.048608 kernel: clk: Disabling unused clocks
Nov 8 00:31:06.048617 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:31:06.048627 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:31:06.048637 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:31:06.048648 kernel: Run /init as init process
Nov 8 00:31:06.048661 kernel: with arguments:
Nov 8 00:31:06.048671 kernel: /init
Nov 8 00:31:06.048683 kernel: with environment:
Nov 8 00:31:06.048693 kernel: HOME=/
Nov 8 00:31:06.048703 kernel: TERM=linux
Nov 8 00:31:06.048719 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:31:06.048740 systemd[1]: Detected virtualization kvm.
Nov 8 00:31:06.048766 systemd[1]: Detected architecture x86-64.
Nov 8 00:31:06.048777 systemd[1]: Running in initrd.
Nov 8 00:31:06.048790 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:31:06.048816 systemd[1]: Hostname set to .
Nov 8 00:31:06.048827 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:31:06.048841 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:31:06.048852 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:31:06.048865 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:31:06.048876 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:31:06.048887 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:31:06.048898 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:31:06.048908 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:31:06.048924 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:31:06.048935 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:31:06.048945 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:31:06.048956 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:31:06.048966 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:31:06.048977 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:31:06.048987 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:31:06.048997 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:31:06.049011 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:31:06.049021 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:31:06.049032 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:31:06.049042 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:31:06.049053 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:31:06.049064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:31:06.049077 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:31:06.049087 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:31:06.049098 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:31:06.049111 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:31:06.049122 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:31:06.049132 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:31:06.049143 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:31:06.049153 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:31:06.049164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:31:06.049181 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:31:06.049191 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:31:06.049208 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:31:06.049220 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:31:06.049256 systemd-journald[193]: Collecting audit messages is disabled.
Nov 8 00:31:06.049284 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:31:06.049295 systemd-journald[193]: Journal started
Nov 8 00:31:06.049319 systemd-journald[193]: Runtime Journal (/run/log/journal/be70fccd568f4d81b37e19d20a64a313) is 6.0M, max 48.3M, 42.2M free.
Nov 8 00:31:06.028718 systemd-modules-load[194]: Inserted module 'overlay'
Nov 8 00:31:06.054692 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:31:06.058844 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:31:06.061328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:31:06.069837 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:31:06.072711 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 8 00:31:06.074635 kernel: Bridge firewalling registered
Nov 8 00:31:06.075058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:31:06.081043 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:31:06.085044 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:31:06.090882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:31:06.092708 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:31:06.101383 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:31:06.112018 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:31:06.113034 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:31:06.124695 dracut-cmdline[225]: dracut-dracut-053
Nov 8 00:31:06.128302 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:31:06.130761 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:31:06.148053 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:31:06.188356 systemd-resolved[243]: Positive Trust Anchors:
Nov 8 00:31:06.188386 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:31:06.188422 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:31:06.192086 systemd-resolved[243]: Defaulting to hostname 'linux'.
Nov 8 00:31:06.193941 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:31:06.206218 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:31:06.257863 kernel: SCSI subsystem initialized
Nov 8 00:31:06.266839 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:31:06.278851 kernel: iscsi: registered transport (tcp)
Nov 8 00:31:06.304870 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:31:06.304941 kernel: QLogic iSCSI HBA Driver
Nov 8 00:31:06.362985 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:31:06.378036 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:31:06.410019 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:31:06.410102 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:31:06.411794 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:31:06.456845 kernel: raid6: avx2x4 gen() 27436 MB/s
Nov 8 00:31:06.473866 kernel: raid6: avx2x2 gen() 24730 MB/s
Nov 8 00:31:06.491644 kernel: raid6: avx2x1 gen() 25686 MB/s
Nov 8 00:31:06.491727 kernel: raid6: using algorithm avx2x4 gen() 27436 MB/s
Nov 8 00:31:06.509638 kernel: raid6: .... xor() 6847 MB/s, rmw enabled
Nov 8 00:31:06.509747 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:31:06.535855 kernel: xor: automatically using best checksumming function avx
Nov 8 00:31:06.700841 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:31:06.715008 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:31:06.732008 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:31:06.744629 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Nov 8 00:31:06.749503 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:31:06.765013 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:31:06.783236 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Nov 8 00:31:06.871870 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:31:06.880999 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:31:06.955952 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:31:06.963948 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:31:06.978610 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:31:06.981585 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:31:06.983424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:31:06.987204 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:31:07.000187 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:31:07.015279 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:31:07.020832 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:31:07.028834 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 8 00:31:07.032874 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 8 00:31:07.037276 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:31:07.049061 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:31:07.049091 kernel: GPT:9289727 != 19775487
Nov 8 00:31:07.049101 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:31:07.049112 kernel: GPT:9289727 != 19775487
Nov 8 00:31:07.049122 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:31:07.049132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:31:07.044111 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:31:07.056454 kernel: libata version 3.00 loaded.
Nov 8 00:31:07.047257 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:31:07.049472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:31:07.049626 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:31:07.051688 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:31:07.063060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:31:07.072883 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:31:07.072904 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:31:07.085568 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 8 00:31:07.089923 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (462)
Nov 8 00:31:07.091601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:31:07.100372 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:31:07.100593 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476)
Nov 8 00:31:07.100605 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:31:07.100621 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:31:07.100772 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:31:07.091726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:31:07.104061 kernel: scsi host0: ahci
Nov 8 00:31:07.104291 kernel: scsi host1: ahci
Nov 8 00:31:07.104482 kernel: scsi host2: ahci
Nov 8 00:31:07.107858 kernel: scsi host3: ahci
Nov 8 00:31:07.108129 kernel: scsi host4: ahci
Nov 8 00:31:07.108426 kernel: scsi host5: ahci
Nov 8 00:31:07.108671 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Nov 8 00:31:07.110957 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Nov 8 00:31:07.110985 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Nov 8 00:31:07.113989 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Nov 8 00:31:07.114014 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Nov 8 00:31:07.116366 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Nov 8 00:31:07.118008 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 8 00:31:07.130995 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 8 00:31:07.133232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 8 00:31:07.145552 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:31:07.165963 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:31:07.170877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:31:07.175046 disk-uuid[557]: Primary Header is updated.
Nov 8 00:31:07.175046 disk-uuid[557]: Secondary Entries is updated.
Nov 8 00:31:07.175046 disk-uuid[557]: Secondary Header is updated.
Nov 8 00:31:07.180842 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:31:07.186820 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:31:07.191931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:31:07.196788 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:31:07.204036 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:31:07.234400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:31:07.423855 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 8 00:31:07.423950 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 8 00:31:07.426911 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 8 00:31:07.426992 kernel: ata3.00: applying bridge limits
Nov 8 00:31:07.427949 kernel: ata3.00: configured for UDMA/100
Nov 8 00:31:07.431827 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:31:07.431849 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:31:07.432823 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:31:07.432846 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 8 00:31:07.435827 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:31:07.481369 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 8 00:31:07.481848 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:31:07.494851 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 8 00:31:08.189839 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:31:08.190597 disk-uuid[559]: The operation has completed successfully.
Nov 8 00:31:08.218218 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:31:08.218364 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:31:08.249126 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:31:08.252814 sh[601]: Success
Nov 8 00:31:08.267843 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:31:08.304944 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:31:08.323018 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:31:08.329149 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:31:08.340013 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:31:08.340054 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:31:08.340074 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:31:08.341638 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:31:08.342780 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:31:08.348152 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:31:08.349515 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:31:08.356112 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:31:08.359998 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:31:08.369366 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:31:08.369427 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:31:08.369440 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:31:08.373182 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:31:08.383632 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:31:08.386605 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:31:08.396760 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:31:08.404955 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:31:08.466707 ignition[695]: Ignition 2.19.0
Nov 8 00:31:08.466721 ignition[695]: Stage: fetch-offline
Nov 8 00:31:08.466764 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:31:08.466775 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:31:08.466903 ignition[695]: parsed url from cmdline: ""
Nov 8 00:31:08.466907 ignition[695]: no config URL provided
Nov 8 00:31:08.466913 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:31:08.466923 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:31:08.466965 ignition[695]: op(1): [started] loading QEMU firmware config module
Nov 8 00:31:08.466971 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 8 00:31:08.474489 ignition[695]: op(1): [finished] loading QEMU firmware config module
Nov 8 00:31:08.507120 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:31:08.521977 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:31:08.549403 systemd-networkd[790]: lo: Link UP
Nov 8 00:31:08.549414 systemd-networkd[790]: lo: Gained carrier
Nov 8 00:31:08.551120 systemd-networkd[790]: Enumeration completed
Nov 8 00:31:08.551536 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:31:08.551578 systemd-networkd[790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:31:08.551582 systemd-networkd[790]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:31:08.554136 systemd-networkd[790]: eth0: Link UP
Nov 8 00:31:08.554141 systemd-networkd[790]: eth0: Gained carrier
Nov 8 00:31:08.554155 systemd-networkd[790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:31:08.556645 systemd[1]: Reached target network.target - Network.
Nov 8 00:31:08.578956 systemd-networkd[790]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 8 00:31:08.579127 ignition[695]: parsing config with SHA512: 22ef785f43a857c8dd058b48ab05b6987ca1366a0056349fe598ed12a147e81139fea3dd99a7af9b749adcd3a1746f7b1cff32b29153788b56195c0294be62ef
Nov 8 00:31:08.585359 unknown[695]: fetched base config from "system"
Nov 8 00:31:08.585374 unknown[695]: fetched user config from "qemu"
Nov 8 00:31:08.586198 ignition[695]: fetch-offline: fetch-offline passed
Nov 8 00:31:08.588727 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:31:08.586282 ignition[695]: Ignition finished successfully
Nov 8 00:31:08.592438 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 8 00:31:08.604151 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:31:08.622644 ignition[794]: Ignition 2.19.0
Nov 8 00:31:08.622658 ignition[794]: Stage: kargs
Nov 8 00:31:08.622862 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:31:08.622874 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:31:08.623774 ignition[794]: kargs: kargs passed
Nov 8 00:31:08.623844 ignition[794]: Ignition finished successfully
Nov 8 00:31:08.634883 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:31:08.647069 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:31:08.676556 ignition[803]: Ignition 2.19.0
Nov 8 00:31:08.676569 ignition[803]: Stage: disks
Nov 8 00:31:08.676737 ignition[803]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:31:08.676749 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:31:08.677667 ignition[803]: disks: disks passed
Nov 8 00:31:08.677720 ignition[803]: Ignition finished successfully
Nov 8 00:31:08.686377 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:31:08.689830 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:31:08.690599 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:31:08.694354 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:31:08.698282 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:31:08.701294 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:31:08.715029 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:31:08.732663 systemd-resolved[243]: Detected conflict on linux IN A 10.0.0.145
Nov 8 00:31:08.732683 systemd-resolved[243]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Nov 8 00:31:08.737556 systemd-fsck[814]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:31:08.741306 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:31:08.757908 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:31:08.869833 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:31:08.870118 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:31:08.871974 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:31:08.886987 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:31:08.890024 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:31:08.891048 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:31:08.899496 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (822)
Nov 8 00:31:08.899535 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:31:08.891099 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:31:08.908425 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:31:08.908479 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:31:08.908494 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:31:08.891128 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:31:08.911031 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:31:08.926297 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:31:08.939121 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:31:08.978549 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:31:08.985469 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:31:08.991541 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:31:08.996814 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:31:09.104930 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:31:09.111130 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:31:09.125127 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:31:09.134837 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:31:09.153285 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:31:09.160593 ignition[937]: INFO : Ignition 2.19.0
Nov 8 00:31:09.160593 ignition[937]: INFO : Stage: mount
Nov 8 00:31:09.163590 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:31:09.163590 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:31:09.167675 ignition[937]: INFO : mount: mount passed
Nov 8 00:31:09.167675 ignition[937]: INFO : Ignition finished successfully
Nov 8 00:31:09.171036 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:31:09.183019 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:31:09.338577 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:31:09.358137 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:31:09.366851 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (950) Nov 8 00:31:09.366934 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:31:09.370373 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:31:09.370425 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:31:09.374847 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:31:09.376621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:31:09.407133 ignition[967]: INFO : Ignition 2.19.0 Nov 8 00:31:09.407133 ignition[967]: INFO : Stage: files Nov 8 00:31:09.411430 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:31:09.411430 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:31:09.411430 ignition[967]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:31:09.411430 ignition[967]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:31:09.411430 ignition[967]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:31:09.421690 ignition[967]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:31:09.421690 ignition[967]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:31:09.421690 ignition[967]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:31:09.421690 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:31:09.421690 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:31:09.416252 unknown[967]: wrote ssh authorized keys file for user: core Nov 8 00:31:09.465174 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:31:09.535505 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:31:09.535505 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:31:09.542895 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:31:09.542895 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:31:09.542895 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:31:09.542895 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:31:09.542895 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:31:09.542895 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 
00:31:09.542895 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:31:09.542895 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:31:09.566545 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:31:09.566545 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:31:09.566545 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:31:09.566545 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:31:09.566545 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:31:09.963967 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:31:10.039555 systemd-networkd[790]: eth0: Gained IPv6LL Nov 8 00:31:10.307146 ignition[967]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:31:10.307146 ignition[967]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:31:10.312939 ignition[967]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:31:10.312939 ignition[967]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:31:10.312939 ignition[967]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:31:10.312939 ignition[967]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 8 00:31:10.312939 ignition[967]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:31:10.312939 ignition[967]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:31:10.312939 ignition[967]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 8 00:31:10.312939 ignition[967]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 8 00:31:10.335122 ignition[967]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:31:10.337690 ignition[967]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:31:10.337690 ignition[967]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 8 00:31:10.337690 ignition[967]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:31:10.337690 ignition[967]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:31:10.337690 ignition[967]: INFO : 
files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:31:10.337690 ignition[967]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:31:10.337690 ignition[967]: INFO : files: files passed Nov 8 00:31:10.337690 ignition[967]: INFO : Ignition finished successfully Nov 8 00:31:10.339231 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:31:10.350197 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:31:10.354777 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:31:10.368941 initrd-setup-root-after-ignition[995]: grep: /sysroot/oem/oem-release: No such file or directory Nov 8 00:31:10.358705 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:31:10.372750 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:31:10.372750 initrd-setup-root-after-ignition[998]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:31:10.358884 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:31:10.383232 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:31:10.371911 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:31:10.375411 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:31:10.391042 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:31:10.420295 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:31:10.420460 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:31:10.424158 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:31:10.427530 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:31:10.430868 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:31:10.445052 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:31:10.459746 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:31:10.475959 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:31:10.491291 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:31:10.492336 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:31:10.492894 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:31:10.493139 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:31:10.493261 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:31:10.493693 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:31:10.494238 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:31:10.494513 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
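[Editor's note] The files stage above applied preset policy: op(f) removed the enablement symlinks for coreos-metadata.service and op(11) enabled prepare-helm.service. A rough first-match-wins resolver in the spirit of systemd.preset(5); the table is reconstructed from the log, and the no-match fallback is deliberately left open rather than guessed:

    import fnmatch

    PRESETS = [
        ("disable", "coreos-metadata.service"),  # from op(f) above
        ("enable", "prepare-helm.service"),      # from op(11) above
    ]

    def preset_action(unit):
        # First matching entry wins; None means no preset line decided.
        for action, pattern in PRESETS:
            if fnmatch.fnmatch(unit, pattern):
                return action
        return None

    print(preset_action("coreos-metadata.service"))  # disable
    print(preset_action("prepare-helm.service"))     # enable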
Nov 8 00:31:10.558195 ignition[1022]: INFO : Ignition 2.19.0 Nov 8 00:31:10.558195 ignition[1022]: INFO : Stage: umount Nov 8 00:31:10.558195 ignition[1022]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:31:10.558195 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:31:10.558195 ignition[1022]: INFO : umount: umount passed Nov 8 00:31:10.558195 ignition[1022]: INFO : Ignition finished successfully Nov 8 00:31:10.494785 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:31:10.495334 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:31:10.495623 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:31:10.496160 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:31:10.496447 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:31:10.496718 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:31:10.497256 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:31:10.497511 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:31:10.497624 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:31:10.498078 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:31:10.498352 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:31:10.498602 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:31:10.498720 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:31:10.499191 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:31:10.499303 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:31:10.499756 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:31:10.499888 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:31:10.500123 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:31:10.500302 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:31:10.504929 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:31:10.505404 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:31:10.505609 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:31:10.505930 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:31:10.506043 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:31:10.506182 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:31:10.506274 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:31:10.506467 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:31:10.506591 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:31:10.506749 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:31:10.506883 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:31:10.542176 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:31:10.544630 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:31:10.544792 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
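[Editor's note] The long run of "Stopped target ..." entries here is initrd-cleanup tearing units down in roughly the reverse of their start-up ordering. A toy illustration using Python's graphlib with a hypothetical three-unit dependency chain (not the real unit graph):

    from graphlib import TopologicalSorter

    # after[x] = units that must come up before x (hypothetical edges)
    after = {
        "basic.target": {"sysinit.target"},
        "sysinit.target": {"local-fs.target"},
        "local-fs.target": set(),
    }
    start_order = list(TopologicalSorter(after).static_order())
    stop_order = list(reversed(start_order))
    print(start_order)  # ['local-fs.target', 'sysinit.target', 'basic.target']
    print(stop_order)   # basic.target stops first, mirroring the log above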
Nov 8 00:31:10.549557 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:31:10.551371 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:31:10.551633 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:31:10.554868 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:31:10.555023 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:31:10.560980 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:31:10.561106 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:31:10.565922 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:31:10.566047 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:31:10.570218 systemd[1]: Stopped target network.target - Network. Nov 8 00:31:10.572364 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:31:10.572446 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:31:10.575914 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:31:10.575978 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:31:10.579347 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:31:10.579400 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:31:10.582723 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:31:10.582787 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:31:10.587468 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:31:10.590822 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:31:10.592849 systemd-networkd[790]: eth0: DHCPv6 lease lost Nov 8 00:31:10.595445 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:31:10.596305 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:31:10.596490 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:31:10.600336 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:31:10.600450 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:31:10.617024 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:31:10.620550 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:31:10.620642 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:31:10.624401 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:31:10.628244 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:31:10.628385 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:31:10.678528 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:31:10.678622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:31:10.680818 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:31:10.680875 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:31:10.684537 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:31:10.684591 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:31:10.687047 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 8 00:31:10.687352 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:31:10.690560 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:31:10.690678 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:31:10.694500 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:31:10.694594 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:31:10.696729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:31:10.696776 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:31:10.699916 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:31:10.699975 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:31:10.703897 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:31:10.703953 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:31:10.707101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:31:10.707157 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:31:10.723046 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:31:10.725577 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:31:10.817205 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Nov 8 00:31:10.725689 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:31:10.729353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:31:10.729430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:31:10.733761 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:31:10.733902 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:31:10.748741 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:31:10.748909 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:31:10.751603 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:31:10.754036 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:31:10.754104 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:31:10.765216 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:31:10.779440 systemd[1]: Switching root. Nov 8 00:31:10.837886 systemd-journald[193]: Journal stopped Nov 8 00:31:12.221980 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:31:12.222077 kernel: SELinux: policy capability open_perms=1 Nov 8 00:31:12.222090 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:31:12.222101 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:31:12.222116 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:31:12.222131 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:31:12.222145 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:31:12.222170 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:31:12.222185 kernel: audit: type=1403 audit(1762561871.294:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:31:12.222203 systemd[1]: Successfully loaded SELinux policy in 45.188ms. Nov 8 00:31:12.222237 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.066ms. 
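[Editor's note] After the root switch, the journal restarts and the SELinux policy load and relabel times are reported in milliseconds. A tiny helper for pulling those durations out of journal text (the pattern is mine, not a journald API):

    import re

    def duration_ms(entry):
        """Extract a trailing 'in N.NNNms' figure from a journal entry, if present."""
        m = re.search(r"in (\d+(?:\.\d+)?)ms", entry)
        return float(m.group(1)) if m else None

    print(duration_ms("Successfully loaded SELinux policy in 45.188ms."))              # 45.188
    print(duration_ms("Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.066ms."))  # 13.066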
Nov 8 00:31:12.222258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:31:12.222273 systemd[1]: Detected virtualization kvm. Nov 8 00:31:12.222288 systemd[1]: Detected architecture x86-64. Nov 8 00:31:12.222301 systemd[1]: Detected first boot. Nov 8 00:31:12.222313 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:31:12.222325 zram_generator::config[1066]: No configuration found. Nov 8 00:31:12.222338 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:31:12.222354 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:31:12.222377 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:31:12.222389 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:31:12.222402 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:31:12.222414 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:31:12.222428 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:31:12.222443 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:31:12.222459 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:31:12.222475 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:31:12.222492 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:31:12.222505 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:31:12.222517 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:31:12.222529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:31:12.222541 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:31:12.222554 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:31:12.222567 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:31:12.222579 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:31:12.222594 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:31:12.222606 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:31:12.222618 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:31:12.222631 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:31:12.222643 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:31:12.222656 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:31:12.222668 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:31:12.222680 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:31:12.222701 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:31:12.222714 systemd[1]: Reached target swap.target - Swaps. 
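[Editor's note] The systemd banner above encodes compile-time options as a +/- prefixed token list. A quick sketch of parsing it into a feature map; the string below is the leading part of that banner, shortened here for width:

    FLAGS = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL"
    features = {tok[1:]: tok.startswith("+") for tok in FLAGS.split() if tok[0] in "+-"}
    print(features["SELINUX"])   # True  (consistent with the policy load above)
    print(features["APPARMOR"])  # False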
Nov 8 00:31:12.222726 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:31:12.222738 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:31:12.222750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:31:12.222766 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:31:12.222778 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:31:12.222821 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:31:12.222843 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:31:12.222866 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:31:12.222882 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:31:12.222898 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:31:12.222917 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:31:12.222933 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:31:12.222950 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:31:12.222966 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:31:12.222982 systemd[1]: Reached target machines.target - Containers. Nov 8 00:31:12.222998 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:31:12.223020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:31:12.223036 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:31:12.223177 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:31:12.223223 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:31:12.223262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:31:12.223281 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:31:12.223296 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:31:12.223311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:31:12.223331 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:31:12.223346 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:31:12.223373 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:31:12.223390 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:31:12.223406 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:31:12.223422 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:31:12.223436 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:31:12.223479 systemd-journald[1129]: Collecting audit messages is disabled. Nov 8 00:31:12.223519 systemd-journald[1129]: Journal started Nov 8 00:31:12.223547 systemd-journald[1129]: Runtime Journal (/run/log/journal/be70fccd568f4d81b37e19d20a64a313) is 6.0M, max 48.3M, 42.2M free. 
Nov 8 00:31:11.902677 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:31:11.921786 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 8 00:31:11.922298 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:31:12.228822 kernel: fuse: init (API version 7.39) Nov 8 00:31:12.235396 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:31:12.235471 kernel: loop: module loaded Nov 8 00:31:12.240121 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:31:12.248208 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:31:12.248274 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:31:12.250827 systemd[1]: Stopped verity-setup.service. Nov 8 00:31:12.250878 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:31:12.257963 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:31:12.261204 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:31:12.263631 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:31:12.265611 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:31:12.267429 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:31:12.269495 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:31:12.271594 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:31:12.273655 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:31:12.276642 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:31:12.277072 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:31:12.278824 kernel: ACPI: bus type drm_connector registered Nov 8 00:31:12.280702 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:31:12.281035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:31:12.283529 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:31:12.283762 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:31:12.285951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:31:12.286179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:31:12.288581 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:31:12.288799 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:31:12.291004 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:31:12.291220 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:31:12.293652 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:31:12.295819 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:31:12.298134 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:31:12.313702 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:31:12.324018 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
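[Editor's note] The modprobe@*.service bursts above are template-unit instances, one per module (configfs, dm_mod, drm, efi_pstore, fuse, loop). Functionally each instance boils down to a quiet modprobe of its instance name; a sketch follows, with the exact modprobe flags being an assumption rather than a quote of the unit file:

    import subprocess

    MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]  # instances seen above
    for mod in MODULES:
        # Approximates what each modprobe@<mod>.service instance does;
        # '-abq' (all / respect blacklist / quiet) is assumed, not quoted.
        subprocess.run(["modprobe", "-abq", mod], check=False)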
Nov 8 00:31:12.327528 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:31:12.351079 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:31:12.351157 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:31:12.355635 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:31:12.359416 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:31:12.366538 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:31:12.368404 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:31:12.372214 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:31:12.379978 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:31:12.388051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:31:12.389744 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:31:12.390519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:31:12.392957 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:31:12.397645 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:31:12.400367 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:31:12.403445 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:31:12.405820 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:31:12.420407 systemd-journald[1129]: Time spent on flushing to /var/log/journal/be70fccd568f4d81b37e19d20a64a313 is 35.653ms for 995 entries. Nov 8 00:31:12.420407 systemd-journald[1129]: System Journal (/var/log/journal/be70fccd568f4d81b37e19d20a64a313) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:31:12.784432 systemd-journald[1129]: Received client request to flush runtime journal. Nov 8 00:31:12.784499 kernel: loop0: detected capacity change from 0 to 142488 Nov 8 00:31:12.784552 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:31:12.784605 kernel: loop1: detected capacity change from 0 to 224512 Nov 8 00:31:12.784649 kernel: loop2: detected capacity change from 0 to 140768 Nov 8 00:31:12.784696 kernel: loop3: detected capacity change from 0 to 142488 Nov 8 00:31:12.784725 kernel: loop4: detected capacity change from 0 to 224512 Nov 8 00:31:12.447825 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:31:12.503997 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:31:12.516002 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:31:12.529767 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:31:12.532643 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:31:12.545147 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
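[Editor's note] The (sd-merge) entries below show systemd-sysext activating the containerd-flatcar, docker-flatcar, and kubernetes extension images and overlaying them onto /usr — kubernetes.raw being the /etc/extensions symlink Ignition wrote earlier in this log; the two "Reloading..." passes that follow pick up the unit files the merge exposed. A sketch of listing those activations, with paths following the convention visible above:

    import os
    from pathlib import Path

    for link in sorted(Path("/etc/extensions").glob("*.raw")):
        target = os.readlink(link) if link.is_symlink() else str(link)
        # e.g. kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
        print(f"{link.name} -> {target}")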
Nov 8 00:31:12.551970 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:31:12.556518 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:31:12.568712 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:31:12.745157 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:31:12.756013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:31:12.787596 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:31:12.797862 kernel: loop5: detected capacity change from 0 to 140768 Nov 8 00:31:12.803689 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Nov 8 00:31:12.803708 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Nov 8 00:31:12.811252 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:31:12.817033 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 8 00:31:12.817658 (sd-merge)[1199]: Merged extensions into '/usr'. Nov 8 00:31:12.822673 systemd[1]: Reloading requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:31:12.822846 systemd[1]: Reloading... Nov 8 00:31:12.904100 zram_generator::config[1231]: No configuration found. Nov 8 00:31:12.976104 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:31:13.023509 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:31:13.076046 systemd[1]: Reloading finished in 252 ms. Nov 8 00:31:13.130058 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:31:13.152033 systemd[1]: Starting ensure-sysext.service... Nov 8 00:31:13.162104 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:31:13.167371 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:31:13.171126 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:31:13.171143 systemd[1]: Reloading... Nov 8 00:31:13.212621 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:31:13.213021 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:31:13.214043 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:31:13.214335 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Nov 8 00:31:13.214414 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Nov 8 00:31:13.218576 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:31:13.218676 systemd-tmpfiles[1267]: Skipping /boot Nov 8 00:31:13.222217 zram_generator::config[1296]: No configuration found. Nov 8 00:31:13.230641 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 8 00:31:13.230747 systemd-tmpfiles[1267]: Skipping /boot Nov 8 00:31:13.331915 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:31:13.382459 systemd[1]: Reloading finished in 210 ms. Nov 8 00:31:13.405978 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:31:13.407542 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:31:13.410191 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:31:13.422411 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:31:13.433214 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:31:13.437963 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:31:13.441473 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:31:13.448044 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:31:13.451363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:31:13.455112 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:31:13.459416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:31:13.459589 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:31:13.463871 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:31:13.467147 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:31:13.471372 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:31:13.473993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:31:13.477239 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:31:13.479877 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:31:13.480981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:31:13.481195 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:31:13.485508 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:31:13.488590 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:31:13.488783 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:31:13.492252 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Nov 8 00:31:13.497515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:31:13.497968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:31:13.505305 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:31:13.508671 augenrules[1365]: No rules Nov 8 00:31:13.511112 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
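[Editor's note] systemd-tmpfiles warns above about duplicate lines for /root, /var/log/journal, and /var/lib/systemd across tmpfiles.d fragments; later duplicates are ignored. A hedged sketch of that duplicate detection (field layout per tmpfiles.d: type, path, mode, ...):

    def find_duplicates(paths):
        """Report tmpfiles.d lines whose path (2nd field) was already claimed."""
        seen = {}
        for fname in paths:
            with open(fname) as f:
                for lineno, line in enumerate(f, 1):
                    parts = line.split()
                    if len(parts) < 2 or parts[0].startswith("#"):
                        continue
                    path = parts[1]
                    where = f"{fname}:{lineno}"
                    if path in seen:
                        print(f'{where}: Duplicate line for path "{path}", ignoring.')
                    else:
                        seen[path] = where

    # Fragment names taken from the warnings above:
    find_duplicates(["/usr/lib/tmpfiles.d/provision.conf", "/usr/lib/tmpfiles.d/systemd.conf"])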
Nov 8 00:31:13.516235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:31:13.516464 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:31:13.528113 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:31:13.536956 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:31:13.539009 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:31:13.543562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:31:13.545686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:31:13.547966 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:31:13.549692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:31:13.550626 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:31:13.552850 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:31:13.555699 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:31:13.558323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:31:13.558509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:31:13.561141 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:31:13.561330 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:31:13.563795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:31:13.563992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:31:13.566510 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:31:13.566701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:31:13.573360 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:31:13.575881 systemd[1]: Finished ensure-sysext.service. Nov 8 00:31:13.595841 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1392) Nov 8 00:31:13.599077 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:31:13.602992 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:31:13.603075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:31:13.613089 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:31:13.614481 systemd-resolved[1343]: Positive Trust Anchors: Nov 8 00:31:13.614499 systemd-resolved[1343]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:31:13.614532 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:31:13.618103 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:31:13.618642 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:31:13.620540 systemd-resolved[1343]: Defaulting to hostname 'linux'. Nov 8 00:31:13.623178 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:31:13.627429 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:31:13.668189 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:31:13.671839 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:31:13.678827 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:31:13.679056 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:31:13.697843 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:31:13.697956 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 8 00:31:13.699540 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 8 00:31:13.700062 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 8 00:31:13.700857 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 8 00:31:13.706888 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:31:13.727374 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:31:13.732735 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:31:13.739427 systemd-networkd[1407]: lo: Link UP Nov 8 00:31:13.739436 systemd-networkd[1407]: lo: Gained carrier Nov 8 00:31:13.742790 systemd-networkd[1407]: Enumeration completed Nov 8 00:31:13.745287 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:31:13.745425 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:31:13.745913 systemd-networkd[1407]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:31:13.748977 systemd-networkd[1407]: eth0: Link UP Nov 8 00:31:13.749124 systemd[1]: Reached target network.target - Network. Nov 8 00:31:13.750208 systemd-networkd[1407]: eth0: Gained carrier Nov 8 00:31:13.750536 systemd-networkd[1407]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:31:13.763599 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
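[Editor's note] The resolved block above loads the root zone's positive trust anchor — the DS record for the KSK-2017 root key (key tag 20326, algorithm 8/RSASHA256, digest type 2/SHA-256) — plus a list of negative trust anchors under which DNSSEC validation is skipped. A small sketch of the negative-anchor suffix test (set abbreviated from the list above):

    NEGATIVE_ANCHORS = {"home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "local", "test"}

    def dnssec_applies(name):
        """False if the name sits at or below a negative trust anchor."""
        name = name.rstrip(".")
        return not any(name == d or name.endswith("." + d) for d in NEGATIVE_ANCHORS)

    print(dnssec_applies("printer.local"))  # False - validation skipped
    print(dnssec_applies("example.org"))    # True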
Nov 8 00:31:13.767845 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:31:13.769124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:31:13.779608 systemd-networkd[1407]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 8 00:31:13.782566 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection. Nov 8 00:31:13.784621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:31:13.784847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:31:14.446358 systemd-timesyncd[1409]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 8 00:31:14.446399 systemd-timesyncd[1409]: Initial clock synchronization to Sat 2025-11-08 00:31:14.446259 UTC. Nov 8 00:31:14.446432 systemd-resolved[1343]: Clock change detected. Flushing caches. Nov 8 00:31:14.482041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:31:14.503990 kernel: kvm_amd: TSC scaling supported Nov 8 00:31:14.504065 kernel: kvm_amd: Nested Virtualization enabled Nov 8 00:31:14.504104 kernel: kvm_amd: Nested Paging enabled Nov 8 00:31:14.504128 kernel: kvm_amd: LBR virtualization supported Nov 8 00:31:14.504978 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 8 00:31:14.505014 kernel: kvm_amd: Virtual GIF supported Nov 8 00:31:14.530194 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:31:14.554692 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:31:14.565372 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:31:14.579209 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:31:14.591431 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:31:14.623410 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:31:14.625814 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:31:14.627651 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:31:14.629506 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:31:14.631548 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:31:14.633845 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:31:14.635693 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:31:14.637760 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:31:14.639808 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:31:14.639838 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:31:14.641291 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:31:14.643837 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:31:14.647417 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:31:14.656892 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:31:14.660262 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:31:14.662716 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
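[Editor's note] Note the timestamp jump in this stretch: the last entry before synchronization is stamped 00:31:13.784847 and the next 00:31:14.446358, because systemd-timesyncd stepped the clock after contacting 10.0.0.1:123 — which is also why resolved logs "Clock change detected. Flushing caches." A worked bound on the step, using the two journal timestamps; the true step is slightly smaller, since real time elapsed between the entries:

    from datetime import datetime

    before = datetime.fromisoformat("2025-11-08 00:31:13.784847")  # Stopped systemd-vconsole-setup.service
    after = datetime.fromisoformat("2025-11-08 00:31:14.446358")   # Contacted time server
    print(after - before)  # 0:00:00.661511 -> the step was at most ~0.66 s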
Nov 8 00:31:14.664617 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:31:14.666193 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:31:14.667911 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:31:14.667945 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:31:14.669248 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:31:14.671108 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:31:14.672461 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:31:14.676836 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:31:14.681096 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:31:14.686128 jq[1445]: false Nov 8 00:31:14.683003 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:31:14.684318 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:31:14.689155 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:31:14.693236 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:31:14.697239 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:31:14.704580 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:31:14.706815 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:31:14.707273 dbus-daemon[1444]: [system] SELinux support is enabled Nov 8 00:31:14.707331 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:31:14.708141 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:31:14.711425 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:31:14.715296 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:31:14.720570 jq[1461]: true Nov 8 00:31:14.720933 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:31:14.725496 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:31:14.726091 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:31:14.726445 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:31:14.727047 update_engine[1459]: I20251108 00:31:14.726918 1459 main.cc:92] Flatcar Update Engine starting Nov 8 00:31:14.727056 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:31:14.728258 update_engine[1459]: I20251108 00:31:14.728224 1459 update_check_scheduler.cc:74] Next update check in 2m6s Nov 8 00:31:14.731228 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:31:14.731499 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
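[Editor's note] update_engine above schedules its first check "in 2m6s". Flatcar's update client polls its update service periodically, and the short, uneven initial delay suggests a randomized (fuzzed) schedule. A purely hypothetical sketch of such fuzzing — the base period and fuzz window are invented for illustration, not taken from update_engine:

    import random

    def next_check_seconds(base=2700, fuzz=600):  # hypothetical constants
        """Pick a fuzzed poll interval centred on 'base' seconds."""
        return base + random.uniform(-fuzz / 2, fuzz / 2)

    print(f"Next update check in {next_check_seconds():.0f}s")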
Nov 8 00:31:14.734474 extend-filesystems[1446]: Found loop3 Nov 8 00:31:14.740008 extend-filesystems[1446]: Found loop4 Nov 8 00:31:14.740008 extend-filesystems[1446]: Found loop5 Nov 8 00:31:14.740008 extend-filesystems[1446]: Found sr0 Nov 8 00:31:14.740008 extend-filesystems[1446]: Found vda Nov 8 00:31:14.740008 extend-filesystems[1446]: Found vda1 Nov 8 00:31:14.740008 extend-filesystems[1446]: Found vda2 Nov 8 00:31:14.740008 extend-filesystems[1446]: Found vda3 Nov 8 00:31:14.740008 extend-filesystems[1446]: Found usr Nov 8 00:31:14.740008 extend-filesystems[1446]: Found vda4 Nov 8 00:31:14.740008 extend-filesystems[1446]: Found vda6 Nov 8 00:31:14.740008 extend-filesystems[1446]: Found vda7 Nov 8 00:31:14.740008 extend-filesystems[1446]: Found vda9 Nov 8 00:31:14.740008 extend-filesystems[1446]: Checking size of /dev/vda9 Nov 8 00:31:14.758665 extend-filesystems[1446]: Resized partition /dev/vda9 Nov 8 00:31:14.760169 jq[1465]: true Nov 8 00:31:14.761860 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:31:14.770751 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 8 00:31:14.773498 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:31:14.781666 tar[1464]: linux-amd64/LICENSE Nov 8 00:31:14.781666 tar[1464]: linux-amd64/helm Nov 8 00:31:14.774047 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:31:14.778249 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:31:14.778268 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:31:14.791370 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:31:14.794756 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1378) Nov 8 00:31:14.793973 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:31:14.801205 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 8 00:31:14.799197 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:31:14.826868 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:31:14.826907 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:31:14.827473 systemd-logind[1456]: New seat seat0. Nov 8 00:31:14.827620 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:31:14.827620 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 8 00:31:14.827620 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 8 00:31:14.848236 extend-filesystems[1446]: Resized filesystem in /dev/vda9 Nov 8 00:31:14.852159 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:31:14.831411 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:31:14.831652 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:31:14.834729 systemd[1]: Started systemd-logind.service - User Login Management. 
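[Editor's note] The extend-filesystems unit above grows the root ext4 filesystem online from 553472 to 1864699 4k blocks, i.e. from the ~2.1 GiB image-sized filesystem to the ~7.1 GiB partition it now fills:

    BLOCK = 4096  # ext4 block size, per the resize2fs output above
    old, new = 553472, 1864699
    print(f"{old * BLOCK / 2**30:.2f} GiB -> {new * BLOCK / 2**30:.2f} GiB")
    # 2.11 GiB -> 7.11 GiB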
Nov 8 00:31:14.848148 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:31:14.853535 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:31:14.867091 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:31:15.002967 containerd[1494]: time="2025-11-08T00:31:15.002567970Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:31:15.009764 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:31:15.026225 containerd[1494]: time="2025-11-08T00:31:15.026121132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:31:15.027720 containerd[1494]: time="2025-11-08T00:31:15.027693390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:31:15.027978 containerd[1494]: time="2025-11-08T00:31:15.027767359Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:31:15.027978 containerd[1494]: time="2025-11-08T00:31:15.027785553Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:31:15.028081 containerd[1494]: time="2025-11-08T00:31:15.028063073Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:31:15.028202 containerd[1494]: time="2025-11-08T00:31:15.028188358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:31:15.028352 containerd[1494]: time="2025-11-08T00:31:15.028329252Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:31:15.028412 containerd[1494]: time="2025-11-08T00:31:15.028399123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:31:15.028692 containerd[1494]: time="2025-11-08T00:31:15.028671143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:31:15.028750 containerd[1494]: time="2025-11-08T00:31:15.028738039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:31:15.029199 containerd[1494]: time="2025-11-08T00:31:15.028787652Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:31:15.029199 containerd[1494]: time="2025-11-08T00:31:15.028800486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:31:15.029199 containerd[1494]: time="2025-11-08T00:31:15.028894853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:31:15.029199 containerd[1494]: time="2025-11-08T00:31:15.029160361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:31:15.029430 containerd[1494]: time="2025-11-08T00:31:15.029412253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:31:15.029493 containerd[1494]: time="2025-11-08T00:31:15.029480310Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:31:15.029628 containerd[1494]: time="2025-11-08T00:31:15.029613049Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:31:15.029737 containerd[1494]: time="2025-11-08T00:31:15.029722154Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:31:15.033268 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:31:15.035988 containerd[1494]: time="2025-11-08T00:31:15.035965089Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:31:15.036024 containerd[1494]: time="2025-11-08T00:31:15.036009042Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:31:15.036045 containerd[1494]: time="2025-11-08T00:31:15.036023238Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:31:15.036045 containerd[1494]: time="2025-11-08T00:31:15.036038016Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:31:15.036097 containerd[1494]: time="2025-11-08T00:31:15.036056962Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036197175Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036414702Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036546850Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036562239Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036574853Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036587486Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036599168Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036610610Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036623484Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036636969Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036653490Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036665032Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036675872Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:31:15.037976 containerd[1494]: time="2025-11-08T00:31:15.036700408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036713693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036726096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036742507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036755441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036768586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036779486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036791078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036805124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036818439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036830222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036841833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036852824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036867141Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036885064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038239 containerd[1494]: time="2025-11-08T00:31:15.036895764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038530 containerd[1494]: time="2025-11-08T00:31:15.036905413Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:31:15.038530 containerd[1494]: time="2025-11-08T00:31:15.036977448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:31:15.038530 containerd[1494]: time="2025-11-08T00:31:15.036995271Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:31:15.038530 containerd[1494]: time="2025-11-08T00:31:15.037006282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:31:15.038530 containerd[1494]: time="2025-11-08T00:31:15.037017052Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:31:15.038530 containerd[1494]: time="2025-11-08T00:31:15.037026079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:31:15.038530 containerd[1494]: time="2025-11-08T00:31:15.037037620Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:31:15.038530 containerd[1494]: time="2025-11-08T00:31:15.037053891Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:31:15.038530 containerd[1494]: time="2025-11-08T00:31:15.037072606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:31:15.038694 containerd[1494]: time="2025-11-08T00:31:15.037373450Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:31:15.038694 containerd[1494]: time="2025-11-08T00:31:15.037425618Z" level=info msg="Connect containerd service" Nov 8 00:31:15.038694 containerd[1494]: time="2025-11-08T00:31:15.037472426Z" level=info msg="using legacy CRI server" Nov 8 00:31:15.038694 containerd[1494]: time="2025-11-08T00:31:15.037482655Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:31:15.038694 containerd[1494]: time="2025-11-08T00:31:15.037585929Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:31:15.039307 containerd[1494]: time="2025-11-08T00:31:15.039272982Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:31:15.039824 
containerd[1494]: time="2025-11-08T00:31:15.039764754Z" level=info msg="Start subscribing containerd event" Nov 8 00:31:15.040793 containerd[1494]: time="2025-11-08T00:31:15.040082379Z" level=info msg="Start recovering state" Nov 8 00:31:15.040793 containerd[1494]: time="2025-11-08T00:31:15.040108218Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:31:15.040793 containerd[1494]: time="2025-11-08T00:31:15.040161287Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:31:15.040793 containerd[1494]: time="2025-11-08T00:31:15.040162670Z" level=info msg="Start event monitor" Nov 8 00:31:15.040793 containerd[1494]: time="2025-11-08T00:31:15.040205671Z" level=info msg="Start snapshots syncer" Nov 8 00:31:15.040793 containerd[1494]: time="2025-11-08T00:31:15.040232791Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:31:15.040793 containerd[1494]: time="2025-11-08T00:31:15.040241798Z" level=info msg="Start streaming server" Nov 8 00:31:15.040793 containerd[1494]: time="2025-11-08T00:31:15.040304295Z" level=info msg="containerd successfully booted in 0.038823s" Nov 8 00:31:15.041692 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:31:15.042503 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:31:15.050766 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:31:15.051029 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:31:15.054436 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:31:15.070475 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:31:15.080367 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:31:15.083470 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:31:15.085488 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:31:15.252900 tar[1464]: linux-amd64/README.md Nov 8 00:31:15.268310 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:31:15.620116 systemd-networkd[1407]: eth0: Gained IPv6LL Nov 8 00:31:15.623223 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:31:15.625984 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:31:15.641227 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 8 00:31:15.645085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:31:15.648334 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:31:15.670706 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:31:15.670964 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 8 00:31:15.673301 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:31:15.675872 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:31:16.467827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:31:16.470290 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:31:16.472317 systemd[1]: Startup finished in 1.360s (kernel) + 5.555s (initrd) + 4.568s (userspace) = 11.484s. 
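containerd reports serving on both /run/containerd/containerd.sock and its ttrpc twin before declaring itself booted. A minimal reachability probe, purely a sketch and not containerd's own tooling: it only confirms each UNIX socket accepts a connection and does not speak GRPC or ttrpc.

import socket

# Hypothetical smoke test for the socket paths containerd logged above.
# Talking the actual protocol would need a real client (e.g. crictl or
# the containerd Go client); this only checks connectability.
for path in ("/run/containerd/containerd.sock",
             "/run/containerd/containerd.sock.ttrpc"):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        print(f"{path}: reachable")
    except OSError as e:
        print(f"{path}: {e}")
    finally:
        s.close()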
Nov 8 00:31:16.472800 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:31:16.960110 kubelet[1557]: E1108 00:31:16.960028 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:31:16.964700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:31:16.964930 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:31:16.965343 systemd[1]: kubelet.service: Consumed 1.177s CPU time. Nov 8 00:31:19.698409 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:31:19.699925 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:54914.service - OpenSSH per-connection server daemon (10.0.0.1:54914). Nov 8 00:31:19.761769 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 54914 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:31:19.764436 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:19.774180 systemd-logind[1456]: New session 1 of user core. Nov 8 00:31:19.775527 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:31:19.788200 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:31:19.807863 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:31:19.826401 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:31:19.831787 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:31:19.961150 systemd[1574]: Queued start job for default target default.target. Nov 8 00:31:19.979612 systemd[1574]: Created slice app.slice - User Application Slice. Nov 8 00:31:19.979645 systemd[1574]: Reached target paths.target - Paths. Nov 8 00:31:19.979660 systemd[1574]: Reached target timers.target - Timers. Nov 8 00:31:19.981436 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:31:19.993850 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:31:19.994027 systemd[1574]: Reached target sockets.target - Sockets. Nov 8 00:31:19.994049 systemd[1574]: Reached target basic.target - Basic System. Nov 8 00:31:19.994094 systemd[1574]: Reached target default.target - Main User Target. Nov 8 00:31:19.994132 systemd[1574]: Startup finished in 150ms. Nov 8 00:31:19.994587 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:31:19.996464 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:31:20.057969 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:54928.service - OpenSSH per-connection server daemon (10.0.0.1:54928). Nov 8 00:31:20.101902 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 54928 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:31:20.104071 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:20.109121 systemd-logind[1456]: New session 2 of user core. Nov 8 00:31:20.123094 systemd[1]: Started session-2.scope - Session 2 of User core. 
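The kubelet exit at the top of this stretch is the expected first-boot failure mode: nothing has written /var/lib/kubelet/config.yaml yet (kubeadm would normally materialize it), so the unit exits 1 and systemd is left to retry. A sketch of the same preflight check, with the path taken from the error message and everything else illustrative:

import os, sys

CONFIG = "/var/lib/kubelet/config.yaml"  # path from the kubelet error above

# Mirrors the failure mode in the log: kubelet refuses to start until
# this file exists, and systemd keeps restarting it in the meantime.
if not os.path.exists(CONFIG):
    print(f"open {CONFIG}: no such file or directory", file=sys.stderr)
    sys.exit(1)
print(f"{CONFIG} present; kubelet would proceed to parse it")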
Nov 8 00:31:20.184788 sshd[1585]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:20.200923 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:54928.service: Deactivated successfully. Nov 8 00:31:20.202684 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:31:20.204220 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:31:20.205526 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:54930.service - OpenSSH per-connection server daemon (10.0.0.1:54930). Nov 8 00:31:20.206436 systemd-logind[1456]: Removed session 2. Nov 8 00:31:20.257428 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 54930 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:31:20.259088 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:20.263311 systemd-logind[1456]: New session 3 of user core. Nov 8 00:31:20.273097 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:31:20.324316 sshd[1592]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:20.336913 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:54930.service: Deactivated successfully. Nov 8 00:31:20.338814 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:31:20.340490 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:31:20.341817 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:54938.service - OpenSSH per-connection server daemon (10.0.0.1:54938). Nov 8 00:31:20.342603 systemd-logind[1456]: Removed session 3. Nov 8 00:31:20.379430 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 54938 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:31:20.380996 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:20.384696 systemd-logind[1456]: New session 4 of user core. Nov 8 00:31:20.406075 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:31:20.460592 sshd[1599]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:20.469850 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:54938.service: Deactivated successfully. Nov 8 00:31:20.471731 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:31:20.473414 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:31:20.474799 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:54944.service - OpenSSH per-connection server daemon (10.0.0.1:54944). Nov 8 00:31:20.475679 systemd-logind[1456]: Removed session 4. Nov 8 00:31:20.524489 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 54944 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:31:20.526128 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:20.530591 systemd-logind[1456]: New session 5 of user core. Nov 8 00:31:20.541086 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:31:20.601144 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:31:20.601511 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:31:20.624519 sudo[1609]: pam_unix(sudo:session): session closed for user root Nov 8 00:31:20.626945 sshd[1606]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:20.643435 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:54944.service: Deactivated successfully. Nov 8 00:31:20.645336 systemd[1]: session-5.scope: Deactivated successfully. 
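The block above is a rapid open/close cycle of SSH sessions 2 through 5 from 10.0.0.1. When auditing churn like this it helps to pair the pam_unix "session opened" and "session closed" events per sshd PID; a small stdlib sketch, with the regex shaped to the journal lines above and otherwise illustrative:

import re
import sys

# Matches the pam_unix lines in the log above, e.g.
#   sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
EVENT = re.compile(
    r"sshd\[(?P<pid>\d+)\]: pam_unix\(sshd:session\): "
    r"session (?P<what>opened|closed) for user (?P<user>\w+)"
)

open_sessions = {}  # sshd pid -> user
for line in sys.stdin:
    m = EVENT.search(line)
    if not m:
        continue
    pid, what, user = m["pid"], m["what"], m["user"]
    if what == "opened":
        open_sessions[pid] = user
    else:
        open_sessions.pop(pid, None)
        print(f"sshd[{pid}]: {user} session closed")
print(f"still open: {open_sessions}")

Fed this section of the journal, it would report sessions 2-5 opening and closing in quick succession, leaving only session 7 open at the end.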
Nov 8 00:31:20.647133 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:31:20.656201 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:54952.service - OpenSSH per-connection server daemon (10.0.0.1:54952). Nov 8 00:31:20.657287 systemd-logind[1456]: Removed session 5. Nov 8 00:31:20.691520 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 54952 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:31:20.693183 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:20.698445 systemd-logind[1456]: New session 6 of user core. Nov 8 00:31:20.708089 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:31:20.763040 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:31:20.763412 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:31:20.767457 sudo[1618]: pam_unix(sudo:session): session closed for user root Nov 8 00:31:20.774416 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:31:20.774762 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:31:20.794177 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:31:20.795934 auditctl[1621]: No rules Nov 8 00:31:20.796417 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:31:20.796635 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:31:20.799202 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:31:20.829755 augenrules[1639]: No rules Nov 8 00:31:20.831601 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:31:20.832834 sudo[1617]: pam_unix(sudo:session): session closed for user root Nov 8 00:31:20.834676 sshd[1614]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:20.842195 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:54952.service: Deactivated successfully. Nov 8 00:31:20.843798 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:31:20.845396 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:31:20.854196 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:54956.service - OpenSSH per-connection server daemon (10.0.0.1:54956). Nov 8 00:31:20.855025 systemd-logind[1456]: Removed session 6. Nov 8 00:31:20.888024 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 54956 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:31:20.889746 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:20.893727 systemd-logind[1456]: New session 7 of user core. Nov 8 00:31:20.908189 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:31:20.962710 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:31:20.963077 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:31:21.276360 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 8 00:31:21.276498 (dockerd)[1668]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:31:21.553196 dockerd[1668]: time="2025-11-08T00:31:21.553028932Z" level=info msg="Starting up" Nov 8 00:31:21.929299 dockerd[1668]: time="2025-11-08T00:31:21.929221283Z" level=info msg="Loading containers: start." Nov 8 00:31:22.045993 kernel: Initializing XFRM netlink socket Nov 8 00:31:22.125433 systemd-networkd[1407]: docker0: Link UP Nov 8 00:31:22.148993 dockerd[1668]: time="2025-11-08T00:31:22.148926410Z" level=info msg="Loading containers: done." Nov 8 00:31:22.164055 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2155227637-merged.mount: Deactivated successfully. Nov 8 00:31:22.166355 dockerd[1668]: time="2025-11-08T00:31:22.166309433Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:31:22.166442 dockerd[1668]: time="2025-11-08T00:31:22.166423136Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:31:22.166587 dockerd[1668]: time="2025-11-08T00:31:22.166564672Z" level=info msg="Daemon has completed initialization" Nov 8 00:31:22.205306 dockerd[1668]: time="2025-11-08T00:31:22.205145851Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:31:22.205388 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:31:22.966312 containerd[1494]: time="2025-11-08T00:31:22.966254169Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:31:23.645168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538133270.mount: Deactivated successfully. 
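From "Starting up" at 00:31:21.553 to "API listen on /run/docker.sock" at 00:31:22.205, dockerd took roughly 0.65 s to initialize. Computing such deltas from the RFC3339Nano timestamps dockerd prints is mechanical; the timestamps below are copied from the log (trimmed to microseconds, since datetime carries six fractional digits), the helper itself is illustrative:

from datetime import datetime

# dockerd's own timestamps from the two log lines above.
start = datetime.fromisoformat("2025-11-08T00:31:21.553028+00:00")
ready = datetime.fromisoformat("2025-11-08T00:31:22.205145+00:00")

delta = ready - start
print(f"dockerd init: {delta.total_seconds():.3f}s")  # ~0.652s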
Nov 8 00:31:24.585426 containerd[1494]: time="2025-11-08T00:31:24.585353961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:24.586460 containerd[1494]: time="2025-11-08T00:31:24.586429107Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:31:24.587891 containerd[1494]: time="2025-11-08T00:31:24.587820887Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:24.590654 containerd[1494]: time="2025-11-08T00:31:24.590623762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:24.591724 containerd[1494]: time="2025-11-08T00:31:24.591701092Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.625401838s" Nov 8 00:31:24.591757 containerd[1494]: time="2025-11-08T00:31:24.591729415Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:31:24.592477 containerd[1494]: time="2025-11-08T00:31:24.592443925Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:31:25.766137 containerd[1494]: time="2025-11-08T00:31:25.766064142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:25.766929 containerd[1494]: time="2025-11-08T00:31:25.766877758Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 00:31:25.768233 containerd[1494]: time="2025-11-08T00:31:25.768201219Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:25.772305 containerd[1494]: time="2025-11-08T00:31:25.772257906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:25.776192 containerd[1494]: time="2025-11-08T00:31:25.776120859Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.183633883s" Nov 8 00:31:25.776192 containerd[1494]: time="2025-11-08T00:31:25.776168989Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:31:25.776873 containerd[1494]: 
time="2025-11-08T00:31:25.776819749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:31:27.077801 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:31:27.081348 containerd[1494]: time="2025-11-08T00:31:27.081281525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:27.087619 containerd[1494]: time="2025-11-08T00:31:27.081988049Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 00:31:27.087619 containerd[1494]: time="2025-11-08T00:31:27.083224658Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:27.087619 containerd[1494]: time="2025-11-08T00:31:27.086245582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:27.087619 containerd[1494]: time="2025-11-08T00:31:27.087163113Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.310297598s" Nov 8 00:31:27.087619 containerd[1494]: time="2025-11-08T00:31:27.087187829Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:31:27.087298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:31:27.088014 containerd[1494]: time="2025-11-08T00:31:27.088000092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:31:27.253778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:31:27.258647 (kubelet)[1891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:31:27.309148 kubelet[1891]: E1108 00:31:27.309078 1891 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:31:27.316412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:31:27.316651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:31:28.469658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706721625.mount: Deactivated successfully. 
Nov 8 00:31:30.122980 containerd[1494]: time="2025-11-08T00:31:30.122853972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:30.123636 containerd[1494]: time="2025-11-08T00:31:30.123514481Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:31:30.124740 containerd[1494]: time="2025-11-08T00:31:30.124691377Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:30.126672 containerd[1494]: time="2025-11-08T00:31:30.126638538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:30.127264 containerd[1494]: time="2025-11-08T00:31:30.127232372Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.039208936s" Nov 8 00:31:30.127307 containerd[1494]: time="2025-11-08T00:31:30.127263460Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:31:30.127749 containerd[1494]: time="2025-11-08T00:31:30.127722831Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:31:30.697254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118165228.mount: Deactivated successfully. 
Nov 8 00:31:31.604493 containerd[1494]: time="2025-11-08T00:31:31.604418746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:31.605141 containerd[1494]: time="2025-11-08T00:31:31.605095274Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 8 00:31:31.606168 containerd[1494]: time="2025-11-08T00:31:31.606135625Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:31.609094 containerd[1494]: time="2025-11-08T00:31:31.609043227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:31.610405 containerd[1494]: time="2025-11-08T00:31:31.610363282Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.482608731s" Nov 8 00:31:31.610405 containerd[1494]: time="2025-11-08T00:31:31.610394921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:31:31.611067 containerd[1494]: time="2025-11-08T00:31:31.611030833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:31:32.116255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1474214672.mount: Deactivated successfully. 
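Each completed pull above reports both payload size and wall time, so effective throughput falls straight out; the coredns pull, for example, moved 18,562,039 bytes in 1.482608731 s, roughly 12 MiB/s. The (size, seconds) pairs below are copied verbatim from the containerd log:

# Effective pull throughput for the images fetched above.
pulls = {
    "kube-apiserver:v1.32.9": (28_834_515, 1.625401838),
    "kube-proxy:v1.32.9":     (30_923_225, 3.039208936),
    "coredns:v1.11.3":        (18_562_039, 1.482608731),
}

for image, (size, secs) in pulls.items():
    mib_s = size / secs / 2**20
    print(f"{image}: {mib_s:.1f} MiB/s")  # ~16.9, ~9.7, ~11.9 MiB/s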
Nov 8 00:31:32.122722 containerd[1494]: time="2025-11-08T00:31:32.122644659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:32.123286 containerd[1494]: time="2025-11-08T00:31:32.123219968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:31:32.124538 containerd[1494]: time="2025-11-08T00:31:32.124502252Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:32.129506 containerd[1494]: time="2025-11-08T00:31:32.129467412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:32.130269 containerd[1494]: time="2025-11-08T00:31:32.130224231Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 519.165345ms" Nov 8 00:31:32.130269 containerd[1494]: time="2025-11-08T00:31:32.130261551Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:31:32.130691 containerd[1494]: time="2025-11-08T00:31:32.130664176Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:31:32.684753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665374688.mount: Deactivated successfully. Nov 8 00:31:34.661213 containerd[1494]: time="2025-11-08T00:31:34.661144145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:34.661883 containerd[1494]: time="2025-11-08T00:31:34.661803020Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 8 00:31:34.663015 containerd[1494]: time="2025-11-08T00:31:34.662977543Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:34.665880 containerd[1494]: time="2025-11-08T00:31:34.665821064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:34.667148 containerd[1494]: time="2025-11-08T00:31:34.667106905Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.536410269s" Nov 8 00:31:34.667148 containerd[1494]: time="2025-11-08T00:31:34.667142532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:31:36.829631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:31:36.840159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:31:36.869489 systemd[1]: Reloading requested from client PID 2048 ('systemctl') (unit session-7.scope)... Nov 8 00:31:36.869507 systemd[1]: Reloading... Nov 8 00:31:36.966996 zram_generator::config[2090]: No configuration found. Nov 8 00:31:37.280203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:31:37.359502 systemd[1]: Reloading finished in 489 ms. Nov 8 00:31:37.417476 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:31:37.417611 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:31:37.418024 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:31:37.419810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:31:37.594489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:31:37.600248 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:31:37.640828 kubelet[2136]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:31:37.640828 kubelet[2136]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:31:37.640828 kubelet[2136]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:31:37.641371 kubelet[2136]: I1108 00:31:37.640923 2136 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:31:38.028874 kubelet[2136]: I1108 00:31:38.028731 2136 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:31:38.028874 kubelet[2136]: I1108 00:31:38.028770 2136 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:31:38.029084 kubelet[2136]: I1108 00:31:38.029062 2136 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:31:38.051563 kubelet[2136]: E1108 00:31:38.051492 2136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:38.052332 kubelet[2136]: I1108 00:31:38.052283 2136 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:31:38.059908 kubelet[2136]: E1108 00:31:38.059865 2136 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:31:38.059908 kubelet[2136]: I1108 00:31:38.059900 2136 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:31:38.065402 kubelet[2136]: I1108 00:31:38.065368 2136 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:31:38.067332 kubelet[2136]: I1108 00:31:38.067284 2136 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:31:38.067490 kubelet[2136]: I1108 00:31:38.067322 2136 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:31:38.067490 kubelet[2136]: I1108 00:31:38.067490 2136 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:31:38.067611 kubelet[2136]: I1108 00:31:38.067500 2136 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:31:38.067729 kubelet[2136]: I1108 00:31:38.067697 2136 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:31:38.070184 kubelet[2136]: I1108 00:31:38.070158 2136 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:31:38.070223 kubelet[2136]: I1108 00:31:38.070191 2136 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:31:38.070223 kubelet[2136]: I1108 00:31:38.070211 2136 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:31:38.070223 kubelet[2136]: I1108 00:31:38.070221 2136 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:31:38.074090 kubelet[2136]: I1108 00:31:38.073533 2136 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:31:38.076225 kubelet[2136]: W1108 00:31:38.076188 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Nov 8 00:31:38.076277 kubelet[2136]: E1108 00:31:38.076235 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:38.076477 kubelet[2136]: I1108 00:31:38.076445 2136 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:31:38.077310 kubelet[2136]: W1108 00:31:38.077246 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Nov 8 00:31:38.077442 kubelet[2136]: E1108 00:31:38.077328 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:38.079781 kubelet[2136]: W1108 00:31:38.079707 2136 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:31:38.081871 kubelet[2136]: I1108 00:31:38.081838 2136 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:31:38.081912 kubelet[2136]: I1108 00:31:38.081878 2136 server.go:1287] "Started kubelet" Nov 8 00:31:38.082872 kubelet[2136]: I1108 00:31:38.082318 2136 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:31:38.083304 kubelet[2136]: I1108 00:31:38.083278 2136 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:31:38.084144 kubelet[2136]: I1108 00:31:38.083487 2136 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:31:38.084144 kubelet[2136]: I1108 00:31:38.083837 2136 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:31:38.084144 kubelet[2136]: I1108 00:31:38.084058 2136 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:31:38.084759 kubelet[2136]: I1108 00:31:38.084738 2136 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:31:38.085906 kubelet[2136]: E1108 00:31:38.085775 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:31:38.085906 kubelet[2136]: I1108 00:31:38.085827 2136 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:31:38.085998 kubelet[2136]: I1108 00:31:38.085985 2136 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:31:38.086050 kubelet[2136]: I1108 00:31:38.086033 2136 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:31:38.086407 kubelet[2136]: W1108 00:31:38.086365 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Nov 8 00:31:38.086443 kubelet[2136]: E1108 00:31:38.086411 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:38.086925 kubelet[2136]: I1108 00:31:38.086680 2136 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:31:38.086925 kubelet[2136]: I1108 00:31:38.086758 2136 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:31:38.087368 kubelet[2136]: E1108 00:31:38.087341 2136 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:31:38.087690 kubelet[2136]: I1108 00:31:38.087672 2136 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:31:38.089620 kubelet[2136]: E1108 00:31:38.088323 2136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875e0b535ceab5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:31:38.081856349 +0000 UTC m=+0.477474185,LastTimestamp:2025-11-08 00:31:38.081856349 +0000 UTC m=+0.477474185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:31:38.090006 kubelet[2136]: E1108 00:31:38.089946 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="200ms" Nov 8 00:31:38.101965 kubelet[2136]: I1108 00:31:38.101905 2136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:31:38.101965 kubelet[2136]: I1108 00:31:38.101944 2136 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:31:38.102064 kubelet[2136]: I1108 00:31:38.101976 2136 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:31:38.102064 kubelet[2136]: I1108 00:31:38.101993 2136 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:31:38.103486 kubelet[2136]: I1108 00:31:38.103446 2136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:31:38.103486 kubelet[2136]: I1108 00:31:38.103479 2136 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:31:38.103554 kubelet[2136]: I1108 00:31:38.103501 2136 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:31:38.103554 kubelet[2136]: I1108 00:31:38.103510 2136 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:31:38.103607 kubelet[2136]: E1108 00:31:38.103560 2136 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:31:38.186172 kubelet[2136]: E1108 00:31:38.186120 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:31:38.204495 kubelet[2136]: E1108 00:31:38.204461 2136 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:31:38.286857 kubelet[2136]: E1108 00:31:38.286709 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:31:38.291477 kubelet[2136]: E1108 00:31:38.291430 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="400ms" Nov 8 00:31:38.387734 kubelet[2136]: E1108 00:31:38.387642 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:31:38.405185 kubelet[2136]: E1108 00:31:38.405118 2136 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:31:38.488689 kubelet[2136]: E1108 00:31:38.488614 2136 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:31:38.490512 kubelet[2136]: I1108 00:31:38.490460 2136 policy_none.go:49] "None policy: Start" Nov 8 00:31:38.490512 kubelet[2136]: I1108 00:31:38.490493 2136 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:31:38.490512 kubelet[2136]: I1108 00:31:38.490510 2136 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:31:38.490892 kubelet[2136]: W1108 00:31:38.490812 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Nov 8 00:31:38.490937 kubelet[2136]: E1108 00:31:38.490893 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:38.496599 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:31:38.516809 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:31:38.520223 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
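The reflector, event-POST, and lease failures above all share one root cause: every client-go call targets https://10.0.0.145:6443 and the TCP handshake itself fails because kube-apiserver has not bound the port yet. A minimal Go sketch (not kubelet code; only the address is taken from the log) that reproduces the same "connect: connection refused" condition:

```go
// probe_apiserver.go — reproduces the "dial tcp 10.0.0.145:6443: connect:
// connection refused" failure mode seen in the reflector errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Each client-go reflector issues an HTTPS LIST against this endpoint;
	// until kube-apiserver is listening, the TCP dial itself is refused.
	conn, err := net.DialTimeout("tcp", "10.0.0.145:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // "connect: connection refused"
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```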
Nov 8 00:31:38.532160 kubelet[2136]: I1108 00:31:38.531996 2136 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:31:38.532247 kubelet[2136]: I1108 00:31:38.532229 2136 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:31:38.532291 kubelet[2136]: I1108 00:31:38.532243 2136 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:31:38.532542 kubelet[2136]: I1108 00:31:38.532512 2136 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:31:38.564845 kubelet[2136]: E1108 00:31:38.533374 2136 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:31:38.564845 kubelet[2136]: E1108 00:31:38.564818 2136 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:31:38.634054 kubelet[2136]: I1108 00:31:38.633998 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:31:38.634554 kubelet[2136]: E1108 00:31:38.634501 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Nov 8 00:31:38.692230 kubelet[2136]: E1108 00:31:38.692182 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="800ms" Nov 8 00:31:38.814829 systemd[1]: Created slice kubepods-burstable-pod182542cf907c12189406c5abc78bb4f2.slice - libcontainer container kubepods-burstable-pod182542cf907c12189406c5abc78bb4f2.slice. Nov 8 00:31:38.833688 kubelet[2136]: E1108 00:31:38.833645 2136 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:38.835645 kubelet[2136]: I1108 00:31:38.835620 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:31:38.836056 kubelet[2136]: E1108 00:31:38.836017 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Nov 8 00:31:38.836836 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Nov 8 00:31:38.839395 kubelet[2136]: E1108 00:31:38.839360 2136 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:38.842050 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
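Note how the "Failed to ensure lease exists, will retry" interval doubles on each failure: 200ms, then 400ms, then 800ms above, reaching 1.6s further down. A generic sketch of that capped exponential backoff pattern, for illustration only (this is not the kubelet's nodelease controller, and the 7s cap is an assumption, not taken from the log):

```go
// backoff.go — reproduces the doubling visible in the lease-retry intervals
// (200ms → 400ms → 800ms → 1.6s). Generic sketch; the cap is assumed.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed upper bound, not from the log
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: interval=%v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```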
Nov 8 00:31:38.844875 kubelet[2136]: E1108 00:31:38.844846 2136 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:38.892464 kubelet[2136]: I1108 00:31:38.892405 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/182542cf907c12189406c5abc78bb4f2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"182542cf907c12189406c5abc78bb4f2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:38.892464 kubelet[2136]: I1108 00:31:38.892442 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:38.892464 kubelet[2136]: I1108 00:31:38.892472 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:38.892464 kubelet[2136]: I1108 00:31:38.892491 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/182542cf907c12189406c5abc78bb4f2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"182542cf907c12189406c5abc78bb4f2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:38.892780 kubelet[2136]: I1108 00:31:38.892505 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/182542cf907c12189406c5abc78bb4f2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"182542cf907c12189406c5abc78bb4f2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:38.892780 kubelet[2136]: I1108 00:31:38.892521 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:38.892780 kubelet[2136]: I1108 00:31:38.892536 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:38.892780 kubelet[2136]: I1108 00:31:38.892551 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:38.892780 kubelet[2136]: I1108 00:31:38.892572 2136 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:38.899182 kubelet[2136]: W1108 00:31:38.899142 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Nov 8 00:31:38.899236 kubelet[2136]: E1108 00:31:38.899189 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:39.135071 kubelet[2136]: E1108 00:31:39.135023 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:39.135724 containerd[1494]: time="2025-11-08T00:31:39.135655943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:182542cf907c12189406c5abc78bb4f2,Namespace:kube-system,Attempt:0,}" Nov 8 00:31:39.139831 kubelet[2136]: E1108 00:31:39.139792 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:39.140245 containerd[1494]: time="2025-11-08T00:31:39.140195896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 8 00:31:39.145478 kubelet[2136]: E1108 00:31:39.145443 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:39.145883 containerd[1494]: time="2025-11-08T00:31:39.145840369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 8 00:31:39.247020 kubelet[2136]: I1108 00:31:39.246978 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:31:39.247419 kubelet[2136]: E1108 00:31:39.247389 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Nov 8 00:31:39.376748 kubelet[2136]: W1108 00:31:39.376656 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Nov 8 00:31:39.377189 kubelet[2136]: E1108 00:31:39.376754 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:39.493629 kubelet[2136]: 
E1108 00:31:39.493512 2136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="1.6s" Nov 8 00:31:39.681512 kubelet[2136]: W1108 00:31:39.681446 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Nov 8 00:31:39.681594 kubelet[2136]: E1108 00:31:39.681516 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:39.732541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1400471583.mount: Deactivated successfully. Nov 8 00:31:39.739498 containerd[1494]: time="2025-11-08T00:31:39.739443765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:31:39.740379 containerd[1494]: time="2025-11-08T00:31:39.740341950Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:31:39.741135 containerd[1494]: time="2025-11-08T00:31:39.741075826Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:31:39.742075 containerd[1494]: time="2025-11-08T00:31:39.742044131Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:31:39.743041 containerd[1494]: time="2025-11-08T00:31:39.742966410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:31:39.744023 containerd[1494]: time="2025-11-08T00:31:39.743895222Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:31:39.744934 containerd[1494]: time="2025-11-08T00:31:39.744892642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:31:39.748049 containerd[1494]: time="2025-11-08T00:31:39.748013283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:31:39.750124 containerd[1494]: time="2025-11-08T00:31:39.750087522Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 614.33591ms" Nov 8 00:31:39.750934 containerd[1494]: time="2025-11-08T00:31:39.750908862Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 604.911599ms" Nov 8 00:31:39.751777 containerd[1494]: time="2025-11-08T00:31:39.751710175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 611.416817ms" Nov 8 00:31:39.999142 kubelet[2136]: W1108 00:31:39.998878 2136 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.145:6443: connect: connection refused Nov 8 00:31:39.999142 kubelet[2136]: E1108 00:31:39.998987 2136 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:40.021312 containerd[1494]: time="2025-11-08T00:31:40.021214616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:40.021475 containerd[1494]: time="2025-11-08T00:31:40.021325554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:40.021475 containerd[1494]: time="2025-11-08T00:31:40.021359477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:40.021546 containerd[1494]: time="2025-11-08T00:31:40.021494090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:40.022251 containerd[1494]: time="2025-11-08T00:31:40.019311437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:40.022251 containerd[1494]: time="2025-11-08T00:31:40.022040915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:40.022251 containerd[1494]: time="2025-11-08T00:31:40.022053789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:40.022251 containerd[1494]: time="2025-11-08T00:31:40.022121406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:40.022651 containerd[1494]: time="2025-11-08T00:31:40.022571129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:40.022651 containerd[1494]: time="2025-11-08T00:31:40.022635490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:40.022738 containerd[1494]: time="2025-11-08T00:31:40.022652111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:40.022761 containerd[1494]: time="2025-11-08T00:31:40.022738022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:40.086982 kubelet[2136]: I1108 00:31:40.086742 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:31:40.087315 kubelet[2136]: E1108 00:31:40.087159 2136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Nov 8 00:31:40.114409 systemd[1]: Started cri-containerd-5ddcd20828e50db88f4fbf7ab7d4d7440d00ac4701d091747fbd82e10877ea19.scope - libcontainer container 5ddcd20828e50db88f4fbf7ab7d4d7440d00ac4701d091747fbd82e10877ea19. Nov 8 00:31:40.124166 systemd[1]: Started cri-containerd-bff90ccf6b93d1772b96703c792cd10512dc1453218d99002790bfc853a9ddf5.scope - libcontainer container bff90ccf6b93d1772b96703c792cd10512dc1453218d99002790bfc853a9ddf5. Nov 8 00:31:40.147120 systemd[1]: Started cri-containerd-dad51443992be22c7371a60281baa63d500e1b30a615ff75ee8476b4c34c3556.scope - libcontainer container dad51443992be22c7371a60281baa63d500e1b30a615ff75ee8476b4c34c3556. Nov 8 00:31:40.238835 containerd[1494]: time="2025-11-08T00:31:40.238794097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:182542cf907c12189406c5abc78bb4f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ddcd20828e50db88f4fbf7ab7d4d7440d00ac4701d091747fbd82e10877ea19\"" Nov 8 00:31:40.240029 kubelet[2136]: E1108 00:31:40.239843 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:40.240114 containerd[1494]: time="2025-11-08T00:31:40.239849716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"bff90ccf6b93d1772b96703c792cd10512dc1453218d99002790bfc853a9ddf5\"" Nov 8 00:31:40.240865 kubelet[2136]: E1108 00:31:40.240838 2136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:31:40.241706 kubelet[2136]: E1108 00:31:40.241687 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:40.242682 containerd[1494]: time="2025-11-08T00:31:40.242638135Z" level=info msg="CreateContainer within sandbox \"5ddcd20828e50db88f4fbf7ab7d4d7440d00ac4701d091747fbd82e10877ea19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:31:40.244096 containerd[1494]: time="2025-11-08T00:31:40.244072033Z" level=info msg="CreateContainer within sandbox \"bff90ccf6b93d1772b96703c792cd10512dc1453218d99002790bfc853a9ddf5\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:31:40.255828 containerd[1494]: time="2025-11-08T00:31:40.255734300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dad51443992be22c7371a60281baa63d500e1b30a615ff75ee8476b4c34c3556\"" Nov 8 00:31:40.256438 kubelet[2136]: E1108 00:31:40.256403 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:40.258250 containerd[1494]: time="2025-11-08T00:31:40.258221904Z" level=info msg="CreateContainer within sandbox \"dad51443992be22c7371a60281baa63d500e1b30a615ff75ee8476b4c34c3556\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:31:40.739179 containerd[1494]: time="2025-11-08T00:31:40.739128895Z" level=info msg="CreateContainer within sandbox \"dad51443992be22c7371a60281baa63d500e1b30a615ff75ee8476b4c34c3556\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bbf9577cfb49023161e7efea557f9f6fc72eeda463dc06c719d7d796aeb1280f\"" Nov 8 00:31:40.740037 containerd[1494]: time="2025-11-08T00:31:40.739755680Z" level=info msg="StartContainer for \"bbf9577cfb49023161e7efea557f9f6fc72eeda463dc06c719d7d796aeb1280f\"" Nov 8 00:31:40.742136 containerd[1494]: time="2025-11-08T00:31:40.742095808Z" level=info msg="CreateContainer within sandbox \"bff90ccf6b93d1772b96703c792cd10512dc1453218d99002790bfc853a9ddf5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc6256b617af340709fdf111e93ca04c24387186a6295e38a658995b12bc2935\"" Nov 8 00:31:40.742589 containerd[1494]: time="2025-11-08T00:31:40.742554518Z" level=info msg="StartContainer for \"fc6256b617af340709fdf111e93ca04c24387186a6295e38a658995b12bc2935\"" Nov 8 00:31:40.745001 containerd[1494]: time="2025-11-08T00:31:40.744947515Z" level=info msg="CreateContainer within sandbox \"5ddcd20828e50db88f4fbf7ab7d4d7440d00ac4701d091747fbd82e10877ea19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"00a9a960d1649a22d47f0d893ad7bcd1643aac86673d45948221d1ee5f434ee7\"" Nov 8 00:31:40.747970 containerd[1494]: time="2025-11-08T00:31:40.745411014Z" level=info msg="StartContainer for \"00a9a960d1649a22d47f0d893ad7bcd1643aac86673d45948221d1ee5f434ee7\"" Nov 8 00:31:40.830264 systemd[1]: Started cri-containerd-fc6256b617af340709fdf111e93ca04c24387186a6295e38a658995b12bc2935.scope - libcontainer container fc6256b617af340709fdf111e93ca04c24387186a6295e38a658995b12bc2935. Nov 8 00:31:40.833759 systemd[1]: Started cri-containerd-bbf9577cfb49023161e7efea557f9f6fc72eeda463dc06c719d7d796aeb1280f.scope - libcontainer container bbf9577cfb49023161e7efea557f9f6fc72eeda463dc06c719d7d796aeb1280f. Nov 8 00:31:40.837150 systemd[1]: Started cri-containerd-00a9a960d1649a22d47f0d893ad7bcd1643aac86673d45948221d1ee5f434ee7.scope - libcontainer container 00a9a960d1649a22d47f0d893ad7bcd1643aac86673d45948221d1ee5f434ee7. 
Nov 8 00:31:40.895528 containerd[1494]: time="2025-11-08T00:31:40.895377421Z" level=info msg="StartContainer for \"00a9a960d1649a22d47f0d893ad7bcd1643aac86673d45948221d1ee5f434ee7\" returns successfully" Nov 8 00:31:40.897748 containerd[1494]: time="2025-11-08T00:31:40.897505220Z" level=info msg="StartContainer for \"bbf9577cfb49023161e7efea557f9f6fc72eeda463dc06c719d7d796aeb1280f\" returns successfully" Nov 8 00:31:40.897875 containerd[1494]: time="2025-11-08T00:31:40.897851970Z" level=info msg="StartContainer for \"fc6256b617af340709fdf111e93ca04c24387186a6295e38a658995b12bc2935\" returns successfully" Nov 8 00:31:41.114236 kubelet[2136]: E1108 00:31:41.114200 2136 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:41.114731 kubelet[2136]: E1108 00:31:41.114336 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:41.119942 kubelet[2136]: E1108 00:31:41.119912 2136 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:41.120066 kubelet[2136]: E1108 00:31:41.120042 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:41.120279 kubelet[2136]: E1108 00:31:41.120255 2136 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:41.120995 kubelet[2136]: E1108 00:31:41.120364 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:41.689263 kubelet[2136]: I1108 00:31:41.689219 2136 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:31:42.120649 kubelet[2136]: E1108 00:31:42.120611 2136 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:42.121058 kubelet[2136]: E1108 00:31:42.120762 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:42.123774 kubelet[2136]: E1108 00:31:42.123738 2136 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:31:42.123880 kubelet[2136]: E1108 00:31:42.123858 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:42.262621 kubelet[2136]: E1108 00:31:42.262553 2136 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 8 00:31:42.576856 kubelet[2136]: I1108 00:31:42.576695 2136 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:31:42.590406 kubelet[2136]: I1108 00:31:42.590323 2136 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:43.011575 kubelet[2136]: E1108 
00:31:43.011510 2136 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:43.011575 kubelet[2136]: I1108 00:31:43.011551 2136 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:43.013092 kubelet[2136]: E1108 00:31:43.013052 2136 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:43.013092 kubelet[2136]: I1108 00:31:43.013072 2136 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:43.014392 kubelet[2136]: E1108 00:31:43.014347 2136 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:43.085507 kubelet[2136]: I1108 00:31:43.085478 2136 apiserver.go:52] "Watching apiserver" Nov 8 00:31:43.186410 kubelet[2136]: I1108 00:31:43.186350 2136 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:31:43.573886 kubelet[2136]: I1108 00:31:43.573846 2136 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:43.662362 kubelet[2136]: E1108 00:31:43.662311 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:44.122792 kubelet[2136]: E1108 00:31:44.122747 2136 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:45.042554 systemd[1]: Reloading requested from client PID 2421 ('systemctl') (unit session-7.scope)... Nov 8 00:31:45.042571 systemd[1]: Reloading... Nov 8 00:31:45.116990 zram_generator::config[2461]: No configuration found. Nov 8 00:31:45.225225 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:31:45.315849 systemd[1]: Reloading finished in 272 ms. Nov 8 00:31:45.360428 kubelet[2136]: I1108 00:31:45.360371 2136 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:31:45.360518 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:31:45.373540 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:31:45.373847 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:31:45.373905 systemd[1]: kubelet.service: Consumed 1.143s CPU time, 134.2M memory peak, 0B memory swap peak. Nov 8 00:31:45.388226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:31:45.555131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
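The "no PriorityClass with name system-node-critical was found" failures above are transient: kube-apiserver creates the built-in system priority classes itself during bootstrap, so mirror-pod creation succeeds once that has run (as the later "already exists" errors confirm). A hedged client-go sketch that checks whether the class exists yet; the kubeconfig path is an assumption:

```go
// priorityclass_probe.go — checks for the built-in class whose absence caused
// the mirror-pod failures above. Illustrative sketch, not kubelet code.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(),
		"system-node-critical", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // NotFound while the apiserver is still bootstrapping
	}
	fmt.Printf("%s value=%d\n", pc.Name, pc.Value) // value=2000001000 once bootstrapped
}
```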
Nov 8 00:31:45.560942 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:31:45.602854 kubelet[2505]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:31:45.602854 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:31:45.602854 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:31:45.603339 kubelet[2505]: I1108 00:31:45.602924 2505 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:31:45.610416 kubelet[2505]: I1108 00:31:45.610371 2505 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:31:45.610416 kubelet[2505]: I1108 00:31:45.610404 2505 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:31:45.610684 kubelet[2505]: I1108 00:31:45.610661 2505 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:31:45.611879 kubelet[2505]: I1108 00:31:45.611854 2505 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:31:45.615306 kubelet[2505]: I1108 00:31:45.615272 2505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:31:45.619468 kubelet[2505]: E1108 00:31:45.619421 2505 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:31:45.619468 kubelet[2505]: I1108 00:31:45.619451 2505 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:31:45.624523 kubelet[2505]: I1108 00:31:45.624479 2505 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:31:45.624819 kubelet[2505]: I1108 00:31:45.624777 2505 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:31:45.624986 kubelet[2505]: I1108 00:31:45.624813 2505 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:31:45.625082 kubelet[2505]: I1108 00:31:45.624996 2505 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:31:45.625082 kubelet[2505]: I1108 00:31:45.625006 2505 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:31:45.625082 kubelet[2505]: I1108 00:31:45.625057 2505 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:31:45.625234 kubelet[2505]: I1108 00:31:45.625213 2505 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:31:45.625263 kubelet[2505]: I1108 00:31:45.625237 2505 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:31:45.625263 kubelet[2505]: I1108 00:31:45.625255 2505 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:31:45.625325 kubelet[2505]: I1108 00:31:45.625266 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:31:45.625985 kubelet[2505]: I1108 00:31:45.625761 2505 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:31:45.626129 kubelet[2505]: I1108 00:31:45.626101 2505 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:31:45.631003 kubelet[2505]: I1108 00:31:45.626559 2505 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:31:45.631003 kubelet[2505]: I1108 00:31:45.626590 2505 server.go:1287] "Started kubelet" Nov 8 00:31:45.631003 kubelet[2505]: I1108 00:31:45.627513 2505 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:31:45.631003 kubelet[2505]: I1108 00:31:45.627866 2505 
server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:31:45.631003 kubelet[2505]: I1108 00:31:45.627925 2505 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:31:45.631003 kubelet[2505]: I1108 00:31:45.628386 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:31:45.631003 kubelet[2505]: I1108 00:31:45.630028 2505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:31:45.636121 kubelet[2505]: I1108 00:31:45.635406 2505 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:31:45.636121 kubelet[2505]: E1108 00:31:45.635723 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:31:45.636349 kubelet[2505]: I1108 00:31:45.636334 2505 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:31:45.636577 kubelet[2505]: I1108 00:31:45.636563 2505 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:31:45.637080 kubelet[2505]: I1108 00:31:45.637063 2505 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:31:45.646292 kubelet[2505]: I1108 00:31:45.646251 2505 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:31:45.646836 kubelet[2505]: I1108 00:31:45.646818 2505 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:31:45.646992 kubelet[2505]: I1108 00:31:45.646972 2505 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:31:45.648229 kubelet[2505]: I1108 00:31:45.648214 2505 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:31:45.648326 kubelet[2505]: I1108 00:31:45.648288 2505 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:31:45.648542 kubelet[2505]: I1108 00:31:45.648529 2505 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:31:45.648625 kubelet[2505]: I1108 00:31:45.648612 2505 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:31:45.648700 kubelet[2505]: I1108 00:31:45.648690 2505 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:31:45.648814 kubelet[2505]: E1108 00:31:45.648793 2505 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:31:45.650095 kubelet[2505]: E1108 00:31:45.650066 2505 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:31:45.680756 kubelet[2505]: I1108 00:31:45.680724 2505 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:31:45.680756 kubelet[2505]: I1108 00:31:45.680745 2505 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:31:45.680869 kubelet[2505]: I1108 00:31:45.680769 2505 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:31:45.680975 kubelet[2505]: I1108 00:31:45.680934 2505 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:31:45.681018 kubelet[2505]: I1108 00:31:45.680967 2505 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:31:45.681018 kubelet[2505]: I1108 00:31:45.680991 2505 policy_none.go:49] "None policy: Start" Nov 8 00:31:45.681018 kubelet[2505]: I1108 00:31:45.681001 2505 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:31:45.681018 kubelet[2505]: I1108 00:31:45.681011 2505 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:31:45.681118 kubelet[2505]: I1108 00:31:45.681101 2505 state_mem.go:75] "Updated machine memory state" Nov 8 00:31:45.684864 kubelet[2505]: I1108 00:31:45.684832 2505 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:31:45.685065 kubelet[2505]: I1108 00:31:45.685047 2505 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:31:45.685123 kubelet[2505]: I1108 00:31:45.685063 2505 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:31:45.685395 kubelet[2505]: I1108 00:31:45.685234 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:31:45.686100 kubelet[2505]: E1108 00:31:45.686080 2505 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:31:45.750187 kubelet[2505]: I1108 00:31:45.750125 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:45.750526 kubelet[2505]: I1108 00:31:45.750137 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:45.750526 kubelet[2505]: I1108 00:31:45.750137 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:45.757736 kubelet[2505]: E1108 00:31:45.757701 2505 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:45.790317 kubelet[2505]: I1108 00:31:45.790293 2505 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:31:45.795948 kubelet[2505]: I1108 00:31:45.795918 2505 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:31:45.796066 kubelet[2505]: I1108 00:31:45.796051 2505 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:31:45.837948 kubelet[2505]: I1108 00:31:45.837890 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/182542cf907c12189406c5abc78bb4f2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"182542cf907c12189406c5abc78bb4f2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:45.837948 kubelet[2505]: I1108 00:31:45.837935 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/182542cf907c12189406c5abc78bb4f2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"182542cf907c12189406c5abc78bb4f2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:45.838186 kubelet[2505]: I1108 00:31:45.837980 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:45.838186 kubelet[2505]: I1108 00:31:45.838001 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:45.838186 kubelet[2505]: I1108 00:31:45.838031 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:45.838186 kubelet[2505]: I1108 00:31:45.838053 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/182542cf907c12189406c5abc78bb4f2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"182542cf907c12189406c5abc78bb4f2\") " pod="kube-system/kube-apiserver-localhost" Nov 
8 00:31:45.838186 kubelet[2505]: I1108 00:31:45.838074 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:45.838305 kubelet[2505]: I1108 00:31:45.838093 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:45.838305 kubelet[2505]: I1108 00:31:45.838113 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:31:46.056668 kubelet[2505]: E1108 00:31:46.056567 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:46.058760 kubelet[2505]: E1108 00:31:46.058682 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:46.058881 kubelet[2505]: E1108 00:31:46.058859 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:46.625766 kubelet[2505]: I1108 00:31:46.625722 2505 apiserver.go:52] "Watching apiserver" Nov 8 00:31:46.637042 kubelet[2505]: I1108 00:31:46.637015 2505 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:31:46.664460 kubelet[2505]: I1108 00:31:46.664401 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:46.664460 kubelet[2505]: E1108 00:31:46.664418 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:46.664638 kubelet[2505]: I1108 00:31:46.664548 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:47.033088 kubelet[2505]: E1108 00:31:47.032423 2505 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:31:47.033088 kubelet[2505]: E1108 00:31:47.032716 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:47.033088 kubelet[2505]: E1108 00:31:47.033002 2505 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:31:47.033342 kubelet[2505]: E1108 00:31:47.033185 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:47.033342 kubelet[2505]: I1108 00:31:47.033185 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.033176093 podStartE2EDuration="2.033176093s" podCreationTimestamp="2025-11-08 00:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:47.032927863 +0000 UTC m=+1.467146477" watchObservedRunningTime="2025-11-08 00:31:47.033176093 +0000 UTC m=+1.467394707" Nov 8 00:31:47.054368 kubelet[2505]: I1108 00:31:47.054281 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.054258768 podStartE2EDuration="2.054258768s" podCreationTimestamp="2025-11-08 00:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:47.043292234 +0000 UTC m=+1.477510848" watchObservedRunningTime="2025-11-08 00:31:47.054258768 +0000 UTC m=+1.488477382" Nov 8 00:31:47.054698 kubelet[2505]: I1108 00:31:47.054431 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.054426752 podStartE2EDuration="4.054426752s" podCreationTimestamp="2025-11-08 00:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:47.04963479 +0000 UTC m=+1.483853404" watchObservedRunningTime="2025-11-08 00:31:47.054426752 +0000 UTC m=+1.488645366" Nov 8 00:31:47.665710 kubelet[2505]: E1108 00:31:47.665675 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:47.666287 kubelet[2505]: E1108 00:31:47.665896 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:48.266463 kubelet[2505]: E1108 00:31:48.266427 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:48.667721 kubelet[2505]: E1108 00:31:48.667682 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:50.607591 kubelet[2505]: E1108 00:31:50.607534 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:50.670142 kubelet[2505]: E1108 00:31:50.670092 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:51.735392 kubelet[2505]: I1108 00:31:51.735358 2505 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:31:51.735845 containerd[1494]: time="2025-11-08T00:31:51.735804419Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
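The pod_startup_latency_tracker entries above compute podStartSLOduration as observedRunningTime minus podCreationTimestamp. A small Go check that redoes the arithmetic with the two timestamps copied from the kube-scheduler entry (pure verification, not kubelet code):

```go
// latency_check.go — reproduces podStartSLOduration=2.033176093s from the
// pod_startup_latency_tracker entry above. Timestamps copied from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.String() layout; a fractional second in the input
	// is accepted when parsing even though the layout omits it.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-11-08 00:31:45 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-11-08 00:31:47.033176093 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 2.033176093s == podStartSLOduration
}
```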
Nov 8 00:31:51.736204 kubelet[2505]: I1108 00:31:51.736048 2505 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:31:52.398347 systemd[1]: Created slice kubepods-besteffort-pod540e3fe9_b9a9_4529_8eeb_baa1d8e3d2ba.slice - libcontainer container kubepods-besteffort-pod540e3fe9_b9a9_4529_8eeb_baa1d8e3d2ba.slice. Nov 8 00:31:52.479979 kubelet[2505]: I1108 00:31:52.479911 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstnk\" (UniqueName: \"kubernetes.io/projected/540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba-kube-api-access-wstnk\") pod \"kube-proxy-77bpb\" (UID: \"540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba\") " pod="kube-system/kube-proxy-77bpb" Nov 8 00:31:52.479979 kubelet[2505]: I1108 00:31:52.479973 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba-kube-proxy\") pod \"kube-proxy-77bpb\" (UID: \"540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba\") " pod="kube-system/kube-proxy-77bpb" Nov 8 00:31:52.480174 kubelet[2505]: I1108 00:31:52.480000 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba-xtables-lock\") pod \"kube-proxy-77bpb\" (UID: \"540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba\") " pod="kube-system/kube-proxy-77bpb" Nov 8 00:31:52.480174 kubelet[2505]: I1108 00:31:52.480017 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba-lib-modules\") pod \"kube-proxy-77bpb\" (UID: \"540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba\") " pod="kube-system/kube-proxy-77bpb" Nov 8 00:31:52.818655 systemd[1]: Created slice kubepods-besteffort-pod61930f8c_9279_4f9c_9107_a0c0cf801395.slice - libcontainer container kubepods-besteffort-pod61930f8c_9279_4f9c_9107_a0c0cf801395.slice. 
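The slice unit created here encodes the pod's QoS class and UID: UID 540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba (visible in the volume entries below) becomes kubepods-besteffort-pod540e3fe9_b9a9_4529_8eeb_baa1d8e3d2ba.slice, with dashes mapped to underscores because "-" is a hierarchy separator in systemd unit names. A sketch of that naming convention (the helper is hypothetical, not kubelet's cgroup manager):

```go
// slice_name.go — reconstructs the systemd slice unit name seen in the log
// from a pod's QoS class and UID. sliceName is a hypothetical helper.
package main

import (
	"fmt"
	"strings"
)

// sliceName maps a QoS class and pod UID to the unit name systemd logs;
// dashes in the UID become underscores, since "-" nests slices in systemd.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba"))
	// Output: kubepods-besteffort-pod540e3fe9_b9a9_4529_8eeb_baa1d8e3d2ba.slice
}
```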
Nov 8 00:31:52.884801 kubelet[2505]: I1108 00:31:52.884723 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78wv\" (UniqueName: \"kubernetes.io/projected/61930f8c-9279-4f9c-9107-a0c0cf801395-kube-api-access-h78wv\") pod \"tigera-operator-7dcd859c48-2hkrp\" (UID: \"61930f8c-9279-4f9c-9107-a0c0cf801395\") " pod="tigera-operator/tigera-operator-7dcd859c48-2hkrp" Nov 8 00:31:52.884801 kubelet[2505]: I1108 00:31:52.884777 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/61930f8c-9279-4f9c-9107-a0c0cf801395-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2hkrp\" (UID: \"61930f8c-9279-4f9c-9107-a0c0cf801395\") " pod="tigera-operator/tigera-operator-7dcd859c48-2hkrp" Nov 8 00:31:53.013618 kubelet[2505]: E1108 00:31:53.013578 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:53.014164 containerd[1494]: time="2025-11-08T00:31:53.014128610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77bpb,Uid:540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba,Namespace:kube-system,Attempt:0,}" Nov 8 00:31:53.039794 containerd[1494]: time="2025-11-08T00:31:53.039541552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:53.039794 containerd[1494]: time="2025-11-08T00:31:53.039622987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:53.039794 containerd[1494]: time="2025-11-08T00:31:53.039640630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:53.039794 containerd[1494]: time="2025-11-08T00:31:53.039729521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:53.068208 systemd[1]: Started cri-containerd-e4696d7e4c5fff1ea15c065120059c82d58a4d2ced038eef8ba3c2a5dfc1e803.scope - libcontainer container e4696d7e4c5fff1ea15c065120059c82d58a4d2ced038eef8ba3c2a5dfc1e803. 
Nov 8 00:31:53.092388 containerd[1494]: time="2025-11-08T00:31:53.092349194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-77bpb,Uid:540e3fe9-b9a9-4529-8eeb-baa1d8e3d2ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4696d7e4c5fff1ea15c065120059c82d58a4d2ced038eef8ba3c2a5dfc1e803\"" Nov 8 00:31:53.093181 kubelet[2505]: E1108 00:31:53.093155 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:53.095050 containerd[1494]: time="2025-11-08T00:31:53.095017468Z" level=info msg="CreateContainer within sandbox \"e4696d7e4c5fff1ea15c065120059c82d58a4d2ced038eef8ba3c2a5dfc1e803\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:31:53.112471 containerd[1494]: time="2025-11-08T00:31:53.112419646Z" level=info msg="CreateContainer within sandbox \"e4696d7e4c5fff1ea15c065120059c82d58a4d2ced038eef8ba3c2a5dfc1e803\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ddf8d31fba90a9ea33b556658ad7e1e2bd7a660342f3276c7b324d95ae6c4ca2\"" Nov 8 00:31:53.113073 containerd[1494]: time="2025-11-08T00:31:53.113049892Z" level=info msg="StartContainer for \"ddf8d31fba90a9ea33b556658ad7e1e2bd7a660342f3276c7b324d95ae6c4ca2\"" Nov 8 00:31:53.122295 containerd[1494]: time="2025-11-08T00:31:53.122245652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2hkrp,Uid:61930f8c-9279-4f9c-9107-a0c0cf801395,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:31:53.141174 systemd[1]: Started cri-containerd-ddf8d31fba90a9ea33b556658ad7e1e2bd7a660342f3276c7b324d95ae6c4ca2.scope - libcontainer container ddf8d31fba90a9ea33b556658ad7e1e2bd7a660342f3276c7b324d95ae6c4ca2. Nov 8 00:31:53.239097 containerd[1494]: time="2025-11-08T00:31:53.238903891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:53.239097 containerd[1494]: time="2025-11-08T00:31:53.238964647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:53.239097 containerd[1494]: time="2025-11-08T00:31:53.238976310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:53.239097 containerd[1494]: time="2025-11-08T00:31:53.239051303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:53.251193 containerd[1494]: time="2025-11-08T00:31:53.250928392Z" level=info msg="StartContainer for \"ddf8d31fba90a9ea33b556658ad7e1e2bd7a660342f3276c7b324d95ae6c4ca2\" returns successfully" Nov 8 00:31:53.263661 systemd[1]: Started cri-containerd-2c6bcbdc73bf0c1b8cc5db1593ce2f55502f4b9c0066b1420f2629d57fd0ccde.scope - libcontainer container 2c6bcbdc73bf0c1b8cc5db1593ce2f55502f4b9c0066b1420f2629d57fd0ccde. 
Nov 8 00:31:53.305771 containerd[1494]: time="2025-11-08T00:31:53.305632342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2hkrp,Uid:61930f8c-9279-4f9c-9107-a0c0cf801395,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2c6bcbdc73bf0c1b8cc5db1593ce2f55502f4b9c0066b1420f2629d57fd0ccde\"" Nov 8 00:31:53.307437 containerd[1494]: time="2025-11-08T00:31:53.307405964Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:31:53.676427 kubelet[2505]: E1108 00:31:53.676387 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:57.335470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984983068.mount: Deactivated successfully. Nov 8 00:31:57.456293 kubelet[2505]: E1108 00:31:57.456260 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:57.465040 kubelet[2505]: I1108 00:31:57.464726 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-77bpb" podStartSLOduration=5.46470998 podStartE2EDuration="5.46470998s" podCreationTimestamp="2025-11-08 00:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:53.688010852 +0000 UTC m=+8.122229466" watchObservedRunningTime="2025-11-08 00:31:57.46470998 +0000 UTC m=+11.898928594" Nov 8 00:31:57.683851 kubelet[2505]: E1108 00:31:57.683725 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:57.686946 containerd[1494]: time="2025-11-08T00:31:57.686881627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:57.687637 containerd[1494]: time="2025-11-08T00:31:57.687566792Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:31:57.688698 containerd[1494]: time="2025-11-08T00:31:57.688651727Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:57.690847 containerd[1494]: time="2025-11-08T00:31:57.690810228Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:57.691455 containerd[1494]: time="2025-11-08T00:31:57.691421522Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.383978296s" Nov 8 00:31:57.691492 containerd[1494]: time="2025-11-08T00:31:57.691452711Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:31:57.693402 containerd[1494]: time="2025-11-08T00:31:57.693362488Z" level=info 
msg="CreateContainer within sandbox \"2c6bcbdc73bf0c1b8cc5db1593ce2f55502f4b9c0066b1420f2629d57fd0ccde\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:31:57.705460 containerd[1494]: time="2025-11-08T00:31:57.705416571Z" level=info msg="CreateContainer within sandbox \"2c6bcbdc73bf0c1b8cc5db1593ce2f55502f4b9c0066b1420f2629d57fd0ccde\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b8a565354da933ef129d178f31bac2e8a248550051dc18f70a4b8a2319c54df7\"" Nov 8 00:31:57.705495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614459331.mount: Deactivated successfully. Nov 8 00:31:57.706168 containerd[1494]: time="2025-11-08T00:31:57.706138347Z" level=info msg="StartContainer for \"b8a565354da933ef129d178f31bac2e8a248550051dc18f70a4b8a2319c54df7\"" Nov 8 00:31:57.737104 systemd[1]: Started cri-containerd-b8a565354da933ef129d178f31bac2e8a248550051dc18f70a4b8a2319c54df7.scope - libcontainer container b8a565354da933ef129d178f31bac2e8a248550051dc18f70a4b8a2319c54df7. Nov 8 00:31:57.762734 containerd[1494]: time="2025-11-08T00:31:57.762589783Z" level=info msg="StartContainer for \"b8a565354da933ef129d178f31bac2e8a248550051dc18f70a4b8a2319c54df7\" returns successfully" Nov 8 00:31:58.270375 kubelet[2505]: E1108 00:31:58.270342 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:31:58.696926 kubelet[2505]: I1108 00:31:58.696860 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2hkrp" podStartSLOduration=2.3114563759999998 podStartE2EDuration="6.696840174s" podCreationTimestamp="2025-11-08 00:31:52 +0000 UTC" firstStartedPulling="2025-11-08 00:31:53.306764528 +0000 UTC m=+7.740983142" lastFinishedPulling="2025-11-08 00:31:57.692148326 +0000 UTC m=+12.126366940" observedRunningTime="2025-11-08 00:31:58.69668014 +0000 UTC m=+13.130898754" watchObservedRunningTime="2025-11-08 00:31:58.696840174 +0000 UTC m=+13.131058788" Nov 8 00:31:59.990142 update_engine[1459]: I20251108 00:31:59.990036 1459 update_attempter.cc:509] Updating boot flags... Nov 8 00:32:00.315999 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2864) Nov 8 00:32:00.417878 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2866) Nov 8 00:32:01.207804 sudo[1650]: pam_unix(sudo:session): session closed for user root Nov 8 00:32:01.212587 sshd[1647]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:01.216597 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:54956.service: Deactivated successfully. Nov 8 00:32:01.219618 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:32:01.219995 systemd[1]: session-7.scope: Consumed 4.200s CPU time, 157.0M memory peak, 0B memory swap peak. Nov 8 00:32:01.222872 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:32:01.226185 systemd-logind[1456]: Removed session 7. Nov 8 00:32:05.458978 systemd[1]: Created slice kubepods-besteffort-podeb64dadd_87c6_424f_a880_6bbbce828efb.slice - libcontainer container kubepods-besteffort-podeb64dadd_87c6_424f_a880_6bbbce828efb.slice. 
Nov 8 00:32:05.468503 kubelet[2505]: I1108 00:32:05.468445 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvpfb\" (UniqueName: \"kubernetes.io/projected/eb64dadd-87c6-424f-a880-6bbbce828efb-kube-api-access-mvpfb\") pod \"calico-typha-7b8b674c4d-xc56d\" (UID: \"eb64dadd-87c6-424f-a880-6bbbce828efb\") " pod="calico-system/calico-typha-7b8b674c4d-xc56d" Nov 8 00:32:05.468503 kubelet[2505]: I1108 00:32:05.468503 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb64dadd-87c6-424f-a880-6bbbce828efb-tigera-ca-bundle\") pod \"calico-typha-7b8b674c4d-xc56d\" (UID: \"eb64dadd-87c6-424f-a880-6bbbce828efb\") " pod="calico-system/calico-typha-7b8b674c4d-xc56d" Nov 8 00:32:05.468885 kubelet[2505]: I1108 00:32:05.468523 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eb64dadd-87c6-424f-a880-6bbbce828efb-typha-certs\") pod \"calico-typha-7b8b674c4d-xc56d\" (UID: \"eb64dadd-87c6-424f-a880-6bbbce828efb\") " pod="calico-system/calico-typha-7b8b674c4d-xc56d" Nov 8 00:32:05.590002 systemd[1]: Created slice kubepods-besteffort-pod7119d1a2_cee5_4372_92c7_ba17734e4f46.slice - libcontainer container kubepods-besteffort-pod7119d1a2_cee5_4372_92c7_ba17734e4f46.slice. Nov 8 00:32:05.669301 kubelet[2505]: I1108 00:32:05.669244 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7119d1a2-cee5-4372-92c7-ba17734e4f46-cni-log-dir\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669301 kubelet[2505]: I1108 00:32:05.669282 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7119d1a2-cee5-4372-92c7-ba17734e4f46-cni-bin-dir\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669301 kubelet[2505]: I1108 00:32:05.669298 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7119d1a2-cee5-4372-92c7-ba17734e4f46-cni-net-dir\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669509 kubelet[2505]: I1108 00:32:05.669316 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7119d1a2-cee5-4372-92c7-ba17734e4f46-xtables-lock\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669509 kubelet[2505]: I1108 00:32:05.669333 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7119d1a2-cee5-4372-92c7-ba17734e4f46-node-certs\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669509 kubelet[2505]: I1108 00:32:05.669347 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/7119d1a2-cee5-4372-92c7-ba17734e4f46-var-run-calico\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669509 kubelet[2505]: I1108 00:32:05.669423 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7119d1a2-cee5-4372-92c7-ba17734e4f46-flexvol-driver-host\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669509 kubelet[2505]: I1108 00:32:05.669500 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7119d1a2-cee5-4372-92c7-ba17734e4f46-lib-modules\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669635 kubelet[2505]: I1108 00:32:05.669524 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7119d1a2-cee5-4372-92c7-ba17734e4f46-tigera-ca-bundle\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669635 kubelet[2505]: I1108 00:32:05.669540 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7119d1a2-cee5-4372-92c7-ba17734e4f46-var-lib-calico\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669635 kubelet[2505]: I1108 00:32:05.669555 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7119d1a2-cee5-4372-92c7-ba17734e4f46-policysync\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.669635 kubelet[2505]: I1108 00:32:05.669573 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcq24\" (UniqueName: \"kubernetes.io/projected/7119d1a2-cee5-4372-92c7-ba17734e4f46-kube-api-access-vcq24\") pod \"calico-node-plrz8\" (UID: \"7119d1a2-cee5-4372-92c7-ba17734e4f46\") " pod="calico-system/calico-node-plrz8" Nov 8 00:32:05.724587 kubelet[2505]: E1108 00:32:05.723809 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:05.763608 kubelet[2505]: E1108 00:32:05.763573 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:05.764785 containerd[1494]: time="2025-11-08T00:32:05.764710175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b8b674c4d-xc56d,Uid:eb64dadd-87c6-424f-a880-6bbbce828efb,Namespace:calico-system,Attempt:0,}" Nov 8 00:32:05.770890 kubelet[2505]: I1108 00:32:05.770158 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"varrun\" (UniqueName: \"kubernetes.io/host-path/88835561-0fd8-4963-bbc3-b0aaf46c9820-varrun\") pod \"csi-node-driver-lkl4b\" (UID: \"88835561-0fd8-4963-bbc3-b0aaf46c9820\") " pod="calico-system/csi-node-driver-lkl4b" Nov 8 00:32:05.770890 kubelet[2505]: I1108 00:32:05.770222 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88835561-0fd8-4963-bbc3-b0aaf46c9820-kubelet-dir\") pod \"csi-node-driver-lkl4b\" (UID: \"88835561-0fd8-4963-bbc3-b0aaf46c9820\") " pod="calico-system/csi-node-driver-lkl4b" Nov 8 00:32:05.770890 kubelet[2505]: I1108 00:32:05.770280 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86mmz\" (UniqueName: \"kubernetes.io/projected/88835561-0fd8-4963-bbc3-b0aaf46c9820-kube-api-access-86mmz\") pod \"csi-node-driver-lkl4b\" (UID: \"88835561-0fd8-4963-bbc3-b0aaf46c9820\") " pod="calico-system/csi-node-driver-lkl4b" Nov 8 00:32:05.770890 kubelet[2505]: I1108 00:32:05.770348 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/88835561-0fd8-4963-bbc3-b0aaf46c9820-socket-dir\") pod \"csi-node-driver-lkl4b\" (UID: \"88835561-0fd8-4963-bbc3-b0aaf46c9820\") " pod="calico-system/csi-node-driver-lkl4b" Nov 8 00:32:05.770890 kubelet[2505]: I1108 00:32:05.770380 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/88835561-0fd8-4963-bbc3-b0aaf46c9820-registration-dir\") pod \"csi-node-driver-lkl4b\" (UID: \"88835561-0fd8-4963-bbc3-b0aaf46c9820\") " pod="calico-system/csi-node-driver-lkl4b" Nov 8 00:32:05.773916 kubelet[2505]: E1108 00:32:05.773871 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.773916 kubelet[2505]: W1108 00:32:05.773902 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.774065 kubelet[2505]: E1108 00:32:05.773946 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.781728 kubelet[2505]: E1108 00:32:05.781647 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.781728 kubelet[2505]: W1108 00:32:05.781669 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.781728 kubelet[2505]: E1108 00:32:05.781689 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:32:05.807406 kubelet[2505]: E1108 00:32:05.807370 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.812037 kubelet[2505]: W1108 00:32:05.807683 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.812037 kubelet[2505]: E1108 00:32:05.808040 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.825010 containerd[1494]: time="2025-11-08T00:32:05.824667872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:32:05.825010 containerd[1494]: time="2025-11-08T00:32:05.824735880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:32:05.825010 containerd[1494]: time="2025-11-08T00:32:05.824750398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:05.825010 containerd[1494]: time="2025-11-08T00:32:05.824843364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:05.849141 systemd[1]: Started cri-containerd-cc4ec2e96590ada2a95721e3b2b1bbdc55bfd4938bb38a60027bc1b09a205fbd.scope - libcontainer container cc4ec2e96590ada2a95721e3b2b1bbdc55bfd4938bb38a60027bc1b09a205fbd. Nov 8 00:32:05.872447 kubelet[2505]: E1108 00:32:05.872407 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.872447 kubelet[2505]: W1108 00:32:05.872434 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.872447 kubelet[2505]: E1108 00:32:05.872454 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.874198 kubelet[2505]: E1108 00:32:05.874174 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.874198 kubelet[2505]: W1108 00:32:05.874191 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.874305 kubelet[2505]: E1108 00:32:05.874210 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:32:05.874558 kubelet[2505]: E1108 00:32:05.874525 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.874596 kubelet[2505]: W1108 00:32:05.874573 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.874596 kubelet[2505]: E1108 00:32:05.874589 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.874934 kubelet[2505]: E1108 00:32:05.874911 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.874934 kubelet[2505]: W1108 00:32:05.874927 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.875162 kubelet[2505]: E1108 00:32:05.874946 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.875228 kubelet[2505]: E1108 00:32:05.875206 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.875228 kubelet[2505]: W1108 00:32:05.875221 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.875281 kubelet[2505]: E1108 00:32:05.875233 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.875560 kubelet[2505]: E1108 00:32:05.875537 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.875560 kubelet[2505]: W1108 00:32:05.875554 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.875619 kubelet[2505]: E1108 00:32:05.875573 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.875848 kubelet[2505]: E1108 00:32:05.875826 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.875848 kubelet[2505]: W1108 00:32:05.875840 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.875905 kubelet[2505]: E1108 00:32:05.875863 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:32:05.876140 kubelet[2505]: E1108 00:32:05.876115 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.876180 kubelet[2505]: W1108 00:32:05.876154 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.876180 kubelet[2505]: E1108 00:32:05.876173 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.876484 kubelet[2505]: E1108 00:32:05.876458 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.876524 kubelet[2505]: W1108 00:32:05.876504 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.876613 kubelet[2505]: E1108 00:32:05.876589 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.876882 kubelet[2505]: E1108 00:32:05.876860 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.876882 kubelet[2505]: W1108 00:32:05.876875 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.876965 kubelet[2505]: E1108 00:32:05.876920 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.877160 kubelet[2505]: E1108 00:32:05.877139 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.877160 kubelet[2505]: W1108 00:32:05.877155 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.877255 kubelet[2505]: E1108 00:32:05.877233 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.878134 kubelet[2505]: E1108 00:32:05.878109 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.878134 kubelet[2505]: W1108 00:32:05.878127 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.878343 kubelet[2505]: E1108 00:32:05.878318 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:32:05.878757 kubelet[2505]: E1108 00:32:05.878733 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.878757 kubelet[2505]: W1108 00:32:05.878753 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.879042 kubelet[2505]: E1108 00:32:05.879017 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.879146 kubelet[2505]: E1108 00:32:05.879126 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.879146 kubelet[2505]: W1108 00:32:05.879140 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.879385 kubelet[2505]: E1108 00:32:05.879362 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.879829 kubelet[2505]: E1108 00:32:05.879806 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.879829 kubelet[2505]: W1108 00:32:05.879823 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.879988 kubelet[2505]: E1108 00:32:05.879964 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.881897 kubelet[2505]: E1108 00:32:05.881578 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.881897 kubelet[2505]: W1108 00:32:05.881596 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.881897 kubelet[2505]: E1108 00:32:05.881683 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.882000 kubelet[2505]: E1108 00:32:05.881935 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.882000 kubelet[2505]: W1108 00:32:05.881945 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.882049 kubelet[2505]: E1108 00:32:05.882013 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:32:05.883256 kubelet[2505]: E1108 00:32:05.883011 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.883256 kubelet[2505]: W1108 00:32:05.883027 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.883256 kubelet[2505]: E1108 00:32:05.883088 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.883412 kubelet[2505]: E1108 00:32:05.883309 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.883412 kubelet[2505]: W1108 00:32:05.883318 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.883745 kubelet[2505]: E1108 00:32:05.883591 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.883906 kubelet[2505]: E1108 00:32:05.883880 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.883906 kubelet[2505]: W1108 00:32:05.883896 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.884935 kubelet[2505]: E1108 00:32:05.884912 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.885899 kubelet[2505]: E1108 00:32:05.885696 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.885899 kubelet[2505]: W1108 00:32:05.885713 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.885899 kubelet[2505]: E1108 00:32:05.885791 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.887024 kubelet[2505]: E1108 00:32:05.886997 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.887024 kubelet[2505]: W1108 00:32:05.887015 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.887233 kubelet[2505]: E1108 00:32:05.887209 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:32:05.890077 kubelet[2505]: E1108 00:32:05.889096 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.890077 kubelet[2505]: W1108 00:32:05.889111 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.891885 kubelet[2505]: E1108 00:32:05.890437 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.891885 kubelet[2505]: E1108 00:32:05.890854 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.891885 kubelet[2505]: W1108 00:32:05.890876 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.891885 kubelet[2505]: E1108 00:32:05.891036 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.891885 kubelet[2505]: E1108 00:32:05.891290 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.891885 kubelet[2505]: W1108 00:32:05.891299 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.891885 kubelet[2505]: E1108 00:32:05.891309 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:05.892531 kubelet[2505]: E1108 00:32:05.892496 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:05.896133 containerd[1494]: time="2025-11-08T00:32:05.895467998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-plrz8,Uid:7119d1a2-cee5-4372-92c7-ba17734e4f46,Namespace:calico-system,Attempt:0,}" Nov 8 00:32:05.897018 kubelet[2505]: E1108 00:32:05.896987 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:05.897018 kubelet[2505]: W1108 00:32:05.897007 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:05.897081 kubelet[2505]: E1108 00:32:05.897023 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 8 00:32:05.902037 containerd[1494]: time="2025-11-08T00:32:05.901387810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b8b674c4d-xc56d,Uid:eb64dadd-87c6-424f-a880-6bbbce828efb,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc4ec2e96590ada2a95721e3b2b1bbdc55bfd4938bb38a60027bc1b09a205fbd\""
Nov 8 00:32:05.907236 kubelet[2505]: E1108 00:32:05.907103 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:32:05.911111 containerd[1494]: time="2025-11-08T00:32:05.911067378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 8 00:32:05.929712 containerd[1494]: time="2025-11-08T00:32:05.929620319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:32:05.929902 containerd[1494]: time="2025-11-08T00:32:05.929718063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:32:05.929902 containerd[1494]: time="2025-11-08T00:32:05.929732511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:32:05.929902 containerd[1494]: time="2025-11-08T00:32:05.929841807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:32:05.948114 systemd[1]: Started cri-containerd-3d76d18a9a9f8730845366bb2a7e3ea2dd7e91c93d1d1cd2f57f0540d9d02dbc.scope - libcontainer container 3d76d18a9a9f8730845366bb2a7e3ea2dd7e91c93d1d1cd2f57f0540d9d02dbc.
Nov 8 00:32:05.970575 containerd[1494]: time="2025-11-08T00:32:05.970540468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-plrz8,Uid:7119d1a2-cee5-4372-92c7-ba17734e4f46,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d76d18a9a9f8730845366bb2a7e3ea2dd7e91c93d1d1cd2f57f0540d9d02dbc\""
Nov 8 00:32:05.971713 kubelet[2505]: E1108 00:32:05.971335 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:32:07.342292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2691483866.mount: Deactivated successfully.
Nov 8 00:32:07.650561 kubelet[2505]: E1108 00:32:07.650228 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:07.884785 containerd[1494]: time="2025-11-08T00:32:07.884728715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:07.885833 containerd[1494]: time="2025-11-08T00:32:07.885779552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:32:07.887290 containerd[1494]: time="2025-11-08T00:32:07.887257846Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:07.889278 containerd[1494]: time="2025-11-08T00:32:07.889251395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:07.889854 containerd[1494]: time="2025-11-08T00:32:07.889821273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.978710291s" Nov 8 00:32:07.889892 containerd[1494]: time="2025-11-08T00:32:07.889852371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:32:07.893347 containerd[1494]: time="2025-11-08T00:32:07.893326930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:32:07.911532 containerd[1494]: time="2025-11-08T00:32:07.911436933Z" level=info msg="CreateContainer within sandbox \"cc4ec2e96590ada2a95721e3b2b1bbdc55bfd4938bb38a60027bc1b09a205fbd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:32:07.923578 containerd[1494]: time="2025-11-08T00:32:07.923531727Z" level=info msg="CreateContainer within sandbox \"cc4ec2e96590ada2a95721e3b2b1bbdc55bfd4938bb38a60027bc1b09a205fbd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bf5bc973eea72af88227c56f694f4f1161041197ad3ee406f30764bd88186322\"" Nov 8 00:32:07.926324 containerd[1494]: time="2025-11-08T00:32:07.926287906Z" level=info msg="StartContainer for \"bf5bc973eea72af88227c56f694f4f1161041197ad3ee406f30764bd88186322\"" Nov 8 00:32:07.955098 systemd[1]: Started cri-containerd-bf5bc973eea72af88227c56f694f4f1161041197ad3ee406f30764bd88186322.scope - libcontainer container bf5bc973eea72af88227c56f694f4f1161041197ad3ee406f30764bd88186322. 
Nov 8 00:32:08.065681 containerd[1494]: time="2025-11-08T00:32:08.065624056Z" level=info msg="StartContainer for \"bf5bc973eea72af88227c56f694f4f1161041197ad3ee406f30764bd88186322\" returns successfully"
Nov 8 00:32:08.707011 kubelet[2505]: E1108 00:32:08.706664 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:32:08.720529 kubelet[2505]: I1108 00:32:08.720456 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b8b674c4d-xc56d" podStartSLOduration=1.735233952 podStartE2EDuration="3.720423237s" podCreationTimestamp="2025-11-08 00:32:05 +0000 UTC" firstStartedPulling="2025-11-08 00:32:05.908029268 +0000 UTC m=+20.342247883" lastFinishedPulling="2025-11-08 00:32:07.893218554 +0000 UTC m=+22.327437168" observedRunningTime="2025-11-08 00:32:08.719715721 +0000 UTC m=+23.153934425" watchObservedRunningTime="2025-11-08 00:32:08.720423237 +0000 UTC m=+23.154641841"
Nov 8 00:32:08.777454 kubelet[2505]: E1108 00:32:08.777401 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:32:08.777454 kubelet[2505]: W1108 00:32:08.777423 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:32:08.777454 kubelet[2505]: E1108 00:32:08.777458 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[last 3 messages repeated 21 more times between 00:32:08.777 and 00:32:08.801; the excerpt ends mid-message]
Error: unexpected end of JSON input" Nov 8 00:32:08.802178 kubelet[2505]: E1108 00:32:08.802155 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:08.802178 kubelet[2505]: W1108 00:32:08.802174 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:08.802239 kubelet[2505]: E1108 00:32:08.802199 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:08.802523 kubelet[2505]: E1108 00:32:08.802505 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:08.802567 kubelet[2505]: W1108 00:32:08.802522 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:08.802567 kubelet[2505]: E1108 00:32:08.802543 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:08.802809 kubelet[2505]: E1108 00:32:08.802784 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:08.802809 kubelet[2505]: W1108 00:32:08.802798 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:08.802913 kubelet[2505]: E1108 00:32:08.802812 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:08.803091 kubelet[2505]: E1108 00:32:08.803068 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:08.803091 kubelet[2505]: W1108 00:32:08.803082 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:08.803215 kubelet[2505]: E1108 00:32:08.803102 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:08.803467 kubelet[2505]: E1108 00:32:08.803435 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:08.803507 kubelet[2505]: W1108 00:32:08.803468 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:08.803507 kubelet[2505]: E1108 00:32:08.803499 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:32:08.804013 kubelet[2505]: E1108 00:32:08.803946 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:08.804013 kubelet[2505]: W1108 00:32:08.804009 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:08.804124 kubelet[2505]: E1108 00:32:08.804029 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:08.804420 kubelet[2505]: E1108 00:32:08.804386 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:08.804420 kubelet[2505]: W1108 00:32:08.804413 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:08.804534 kubelet[2505]: E1108 00:32:08.804430 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:08.804790 kubelet[2505]: E1108 00:32:08.804772 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:08.804845 kubelet[2505]: W1108 00:32:08.804800 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:08.804845 kubelet[2505]: E1108 00:32:08.804829 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:08.805184 kubelet[2505]: E1108 00:32:08.805163 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:08.805184 kubelet[2505]: W1108 00:32:08.805179 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:08.805267 kubelet[2505]: E1108 00:32:08.805201 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:32:08.805531 kubelet[2505]: E1108 00:32:08.805507 2505 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:32:08.805583 kubelet[2505]: W1108 00:32:08.805531 2505 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:32:08.805583 kubelet[2505]: E1108 00:32:08.805545 2505 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:32:09.201168 containerd[1494]: time="2025-11-08T00:32:09.201121086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:09.202215 containerd[1494]: time="2025-11-08T00:32:09.202176028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:32:09.203448 containerd[1494]: time="2025-11-08T00:32:09.203412914Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:09.205547 containerd[1494]: time="2025-11-08T00:32:09.205510305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:09.206111 containerd[1494]: time="2025-11-08T00:32:09.206077096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.312631232s" Nov 8 00:32:09.206111 containerd[1494]: time="2025-11-08T00:32:09.206107513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:32:09.207874 containerd[1494]: time="2025-11-08T00:32:09.207850675Z" level=info msg="CreateContainer within sandbox \"3d76d18a9a9f8730845366bb2a7e3ea2dd7e91c93d1d1cd2f57f0540d9d02dbc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:32:09.223309 containerd[1494]: time="2025-11-08T00:32:09.223249105Z" level=info msg="CreateContainer within sandbox \"3d76d18a9a9f8730845366bb2a7e3ea2dd7e91c93d1d1cd2f57f0540d9d02dbc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5ad120967371f8359b9e197aa87c53491ac0df9f1470607235e91ac37d48bb93\"" Nov 8 00:32:09.223788 containerd[1494]: time="2025-11-08T00:32:09.223678666Z" level=info msg="StartContainer for \"5ad120967371f8359b9e197aa87c53491ac0df9f1470607235e91ac37d48bb93\"" Nov 8 00:32:09.253098 systemd[1]: Started cri-containerd-5ad120967371f8359b9e197aa87c53491ac0df9f1470607235e91ac37d48bb93.scope - libcontainer container 5ad120967371f8359b9e197aa87c53491ac0df9f1470607235e91ac37d48bb93. Nov 8 00:32:09.283808 containerd[1494]: time="2025-11-08T00:32:09.283675869Z" level=info msg="StartContainer for \"5ad120967371f8359b9e197aa87c53491ac0df9f1470607235e91ac37d48bb93\" returns successfully" Nov 8 00:32:09.294647 systemd[1]: cri-containerd-5ad120967371f8359b9e197aa87c53491ac0df9f1470607235e91ac37d48bb93.scope: Deactivated successfully. 
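
The FlexVolume errors above come from kubelet's plugin prober: for every vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it execs the driver binary with an "init" argument and parses its stdout as JSON. The nodeagent~uds/uds binary does not exist yet — it is exactly what the flexvol-driver container started at 00:32:09 installs — so the probe gets empty output and the JSON decode fails with "unexpected end of JSON input"; the container exiting right after start ("Deactivated successfully") is the normal lifecycle of that short-lived installer. A minimal sketch of the probe, assuming only the conventional plugin layout, not kubelet's actual source:

    // flexprobe.go — illustrative FlexVolume "init" probe.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus mirrors the JSON a FlexVolume driver must print on stdout.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func probeInit(driver string) error {
        out, err := exec.Command(driver, "init").CombinedOutput()
        if err != nil {
            // The uds binary is not installed yet, so the exec itself fails
            // (the driver-call.go:149 warnings above).
            return fmt.Errorf("FlexVolume: driver call failed: %v, output: %q", err, out)
        }
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            // Decoding empty output yields exactly "unexpected end of JSON input"
            // (the driver-call.go:262 errors above).
            return fmt.Errorf("failed to unmarshal output: %v", err)
        }
        return nil
    }

    func main() {
        fmt.Println(probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
    }
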
Nov 8 00:32:09.356657 containerd[1494]: time="2025-11-08T00:32:09.356587651Z" level=info msg="shim disconnected" id=5ad120967371f8359b9e197aa87c53491ac0df9f1470607235e91ac37d48bb93 namespace=k8s.io Nov 8 00:32:09.356657 containerd[1494]: time="2025-11-08T00:32:09.356650931Z" level=warning msg="cleaning up after shim disconnected" id=5ad120967371f8359b9e197aa87c53491ac0df9f1470607235e91ac37d48bb93 namespace=k8s.io Nov 8 00:32:09.356657 containerd[1494]: time="2025-11-08T00:32:09.356660579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:32:09.649184 kubelet[2505]: E1108 00:32:09.649129 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:09.710062 kubelet[2505]: E1108 00:32:09.709938 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:09.711245 kubelet[2505]: E1108 00:32:09.710322 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:09.723866 containerd[1494]: time="2025-11-08T00:32:09.723459770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:32:09.905621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ad120967371f8359b9e197aa87c53491ac0df9f1470607235e91ac37d48bb93-rootfs.mount: Deactivated successfully. Nov 8 00:32:10.711314 kubelet[2505]: E1108 00:32:10.711268 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:11.649489 kubelet[2505]: E1108 00:32:11.649429 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:12.736680 containerd[1494]: time="2025-11-08T00:32:12.736622268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:12.737452 containerd[1494]: time="2025-11-08T00:32:12.737385088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:32:12.738472 containerd[1494]: time="2025-11-08T00:32:12.738423476Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:12.742560 containerd[1494]: time="2025-11-08T00:32:12.742385067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:12.743555 containerd[1494]: time="2025-11-08T00:32:12.743523725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.020011797s" Nov 8 00:32:12.743555 containerd[1494]: time="2025-11-08T00:32:12.743556607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:32:12.745471 containerd[1494]: time="2025-11-08T00:32:12.745431594Z" level=info msg="CreateContainer within sandbox \"3d76d18a9a9f8730845366bb2a7e3ea2dd7e91c93d1d1cd2f57f0540d9d02dbc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:32:12.761975 containerd[1494]: time="2025-11-08T00:32:12.761896829Z" level=info msg="CreateContainer within sandbox \"3d76d18a9a9f8730845366bb2a7e3ea2dd7e91c93d1d1cd2f57f0540d9d02dbc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1731c966a4eda199a27f1a494bfa486e51d63d49c4e8989749554d03c8f95db7\"" Nov 8 00:32:12.762613 containerd[1494]: time="2025-11-08T00:32:12.762580899Z" level=info msg="StartContainer for \"1731c966a4eda199a27f1a494bfa486e51d63d49c4e8989749554d03c8f95db7\"" Nov 8 00:32:12.797289 systemd[1]: Started cri-containerd-1731c966a4eda199a27f1a494bfa486e51d63d49c4e8989749554d03c8f95db7.scope - libcontainer container 1731c966a4eda199a27f1a494bfa486e51d63d49c4e8989749554d03c8f95db7. Nov 8 00:32:13.713793 containerd[1494]: time="2025-11-08T00:32:13.713732195Z" level=info msg="StartContainer for \"1731c966a4eda199a27f1a494bfa486e51d63d49c4e8989749554d03c8f95db7\" returns successfully" Nov 8 00:32:13.726422 kubelet[2505]: E1108 00:32:13.726130 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:13.748658 kubelet[2505]: E1108 00:32:13.748606 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:14.429537 systemd[1]: cri-containerd-1731c966a4eda199a27f1a494bfa486e51d63d49c4e8989749554d03c8f95db7.scope: Deactivated successfully. Nov 8 00:32:14.452029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1731c966a4eda199a27f1a494bfa486e51d63d49c4e8989749554d03c8f95db7-rootfs.mount: Deactivated successfully. Nov 8 00:32:14.456663 containerd[1494]: time="2025-11-08T00:32:14.456593539Z" level=info msg="shim disconnected" id=1731c966a4eda199a27f1a494bfa486e51d63d49c4e8989749554d03c8f95db7 namespace=k8s.io Nov 8 00:32:14.456663 containerd[1494]: time="2025-11-08T00:32:14.456647661Z" level=warning msg="cleaning up after shim disconnected" id=1731c966a4eda199a27f1a494bfa486e51d63d49c4e8989749554d03c8f95db7 namespace=k8s.io Nov 8 00:32:14.456663 containerd[1494]: time="2025-11-08T00:32:14.456657109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:32:14.489176 kubelet[2505]: I1108 00:32:14.488869 2505 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:32:14.525177 systemd[1]: Created slice kubepods-besteffort-pod11140329_f7e3_441b_979e_c0443fd17e9d.slice - libcontainer container kubepods-besteffort-pod11140329_f7e3_441b_979e_c0443fd17e9d.slice. 
Nov 8 00:32:14.531430 systemd[1]: Created slice kubepods-besteffort-pod5d526354_b399_458e_b2b3_be2f314ae23a.slice - libcontainer container kubepods-besteffort-pod5d526354_b399_458e_b2b3_be2f314ae23a.slice. Nov 8 00:32:14.539154 systemd[1]: Created slice kubepods-besteffort-pod417d4903_c711_42c7_9ef7_788a2e600314.slice - libcontainer container kubepods-besteffort-pod417d4903_c711_42c7_9ef7_788a2e600314.slice. Nov 8 00:32:14.545563 kubelet[2505]: I1108 00:32:14.545284 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/417d4903-c711-42c7-9ef7-788a2e600314-config\") pod \"goldmane-666569f655-lzqlc\" (UID: \"417d4903-c711-42c7-9ef7-788a2e600314\") " pod="calico-system/goldmane-666569f655-lzqlc" Nov 8 00:32:14.545563 kubelet[2505]: I1108 00:32:14.545313 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/417d4903-c711-42c7-9ef7-788a2e600314-goldmane-key-pair\") pod \"goldmane-666569f655-lzqlc\" (UID: \"417d4903-c711-42c7-9ef7-788a2e600314\") " pod="calico-system/goldmane-666569f655-lzqlc" Nov 8 00:32:14.545563 kubelet[2505]: I1108 00:32:14.545346 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08a2c12f-2341-4bf8-ac6e-959cce58e330-config-volume\") pod \"coredns-668d6bf9bc-h8h25\" (UID: \"08a2c12f-2341-4bf8-ac6e-959cce58e330\") " pod="kube-system/coredns-668d6bf9bc-h8h25" Nov 8 00:32:14.545563 kubelet[2505]: I1108 00:32:14.545367 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11140329-f7e3-441b-979e-c0443fd17e9d-whisker-backend-key-pair\") pod \"whisker-8696ddb695-64p2t\" (UID: \"11140329-f7e3-441b-979e-c0443fd17e9d\") " pod="calico-system/whisker-8696ddb695-64p2t" Nov 8 00:32:14.545563 kubelet[2505]: I1108 00:32:14.545383 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5npvc\" (UniqueName: \"kubernetes.io/projected/11140329-f7e3-441b-979e-c0443fd17e9d-kube-api-access-5npvc\") pod \"whisker-8696ddb695-64p2t\" (UID: \"11140329-f7e3-441b-979e-c0443fd17e9d\") " pod="calico-system/whisker-8696ddb695-64p2t" Nov 8 00:32:14.545490 systemd[1]: Created slice kubepods-besteffort-podb48386d0_fbeb_4205_a9d9_bf52a9eeb9e9.slice - libcontainer container kubepods-besteffort-podb48386d0_fbeb_4205_a9d9_bf52a9eeb9e9.slice. 
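
The "Created slice" lines show the systemd cgroup driver at work: each pod gets a slice under kubepods.slice named from its QoS class (besteffort, burstable) and its UID, with the UID's dashes rewritten to underscores because systemd reserves "-" to express slice hierarchy. A sketch of that naming convention as it appears in these entries:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice derives the systemd slice name from a pod's QoS class and UID.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("besteffort", "11140329-f7e3-441b-979e-c0443fd17e9d"))
        // kubepods-besteffort-pod11140329_f7e3_441b_979e_c0443fd17e9d.slice
    }
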
Nov 8 00:32:14.545857 kubelet[2505]: I1108 00:32:14.545406 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmnx\" (UniqueName: \"kubernetes.io/projected/123d6cb1-1650-4283-829f-77b1235c57a8-kube-api-access-blmnx\") pod \"coredns-668d6bf9bc-58f4z\" (UID: \"123d6cb1-1650-4283-829f-77b1235c57a8\") " pod="kube-system/coredns-668d6bf9bc-58f4z" Nov 8 00:32:14.545857 kubelet[2505]: I1108 00:32:14.545423 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11140329-f7e3-441b-979e-c0443fd17e9d-whisker-ca-bundle\") pod \"whisker-8696ddb695-64p2t\" (UID: \"11140329-f7e3-441b-979e-c0443fd17e9d\") " pod="calico-system/whisker-8696ddb695-64p2t" Nov 8 00:32:14.545857 kubelet[2505]: I1108 00:32:14.545439 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9-calico-apiserver-certs\") pod \"calico-apiserver-6bdc4f9f54-592vx\" (UID: \"b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9\") " pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" Nov 8 00:32:14.545857 kubelet[2505]: I1108 00:32:14.545459 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/51a57672-a43f-42d3-abfb-83cef5f71936-calico-apiserver-certs\") pod \"calico-apiserver-6bdc4f9f54-9vq7q\" (UID: \"51a57672-a43f-42d3-abfb-83cef5f71936\") " pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" Nov 8 00:32:14.545857 kubelet[2505]: I1108 00:32:14.545475 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/417d4903-c711-42c7-9ef7-788a2e600314-goldmane-ca-bundle\") pod \"goldmane-666569f655-lzqlc\" (UID: \"417d4903-c711-42c7-9ef7-788a2e600314\") " pod="calico-system/goldmane-666569f655-lzqlc" Nov 8 00:32:14.546002 kubelet[2505]: I1108 00:32:14.545493 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/123d6cb1-1650-4283-829f-77b1235c57a8-config-volume\") pod \"coredns-668d6bf9bc-58f4z\" (UID: \"123d6cb1-1650-4283-829f-77b1235c57a8\") " pod="kube-system/coredns-668d6bf9bc-58f4z" Nov 8 00:32:14.546002 kubelet[2505]: I1108 00:32:14.545676 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqhfr\" (UniqueName: \"kubernetes.io/projected/417d4903-c711-42c7-9ef7-788a2e600314-kube-api-access-xqhfr\") pod \"goldmane-666569f655-lzqlc\" (UID: \"417d4903-c711-42c7-9ef7-788a2e600314\") " pod="calico-system/goldmane-666569f655-lzqlc" Nov 8 00:32:14.546002 kubelet[2505]: I1108 00:32:14.545834 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5snz\" (UniqueName: \"kubernetes.io/projected/b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9-kube-api-access-p5snz\") pod \"calico-apiserver-6bdc4f9f54-592vx\" (UID: \"b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9\") " pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" Nov 8 00:32:14.546002 kubelet[2505]: I1108 00:32:14.545855 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrd4x\" (UniqueName: 
\"kubernetes.io/projected/51a57672-a43f-42d3-abfb-83cef5f71936-kube-api-access-lrd4x\") pod \"calico-apiserver-6bdc4f9f54-9vq7q\" (UID: \"51a57672-a43f-42d3-abfb-83cef5f71936\") " pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" Nov 8 00:32:14.546002 kubelet[2505]: I1108 00:32:14.545870 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f797d\" (UniqueName: \"kubernetes.io/projected/08a2c12f-2341-4bf8-ac6e-959cce58e330-kube-api-access-f797d\") pod \"coredns-668d6bf9bc-h8h25\" (UID: \"08a2c12f-2341-4bf8-ac6e-959cce58e330\") " pod="kube-system/coredns-668d6bf9bc-h8h25" Nov 8 00:32:14.546132 kubelet[2505]: I1108 00:32:14.545900 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdxs\" (UniqueName: \"kubernetes.io/projected/5d526354-b399-458e-b2b3-be2f314ae23a-kube-api-access-xwdxs\") pod \"calico-kube-controllers-7689cf9c54-vlx96\" (UID: \"5d526354-b399-458e-b2b3-be2f314ae23a\") " pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" Nov 8 00:32:14.546132 kubelet[2505]: I1108 00:32:14.545922 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d526354-b399-458e-b2b3-be2f314ae23a-tigera-ca-bundle\") pod \"calico-kube-controllers-7689cf9c54-vlx96\" (UID: \"5d526354-b399-458e-b2b3-be2f314ae23a\") " pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" Nov 8 00:32:14.553206 systemd[1]: Created slice kubepods-besteffort-pod51a57672_a43f_42d3_abfb_83cef5f71936.slice - libcontainer container kubepods-besteffort-pod51a57672_a43f_42d3_abfb_83cef5f71936.slice. Nov 8 00:32:14.557275 systemd[1]: Created slice kubepods-burstable-pod123d6cb1_1650_4283_829f_77b1235c57a8.slice - libcontainer container kubepods-burstable-pod123d6cb1_1650_4283_829f_77b1235c57a8.slice. Nov 8 00:32:14.564379 systemd[1]: Created slice kubepods-burstable-pod08a2c12f_2341_4bf8_ac6e_959cce58e330.slice - libcontainer container kubepods-burstable-pod08a2c12f_2341_4bf8_ac6e_959cce58e330.slice. 
Nov 8 00:32:14.751429 kubelet[2505]: E1108 00:32:14.751293 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:14.752420 containerd[1494]: time="2025-11-08T00:32:14.752392479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:32:14.829984 containerd[1494]: time="2025-11-08T00:32:14.829906865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8696ddb695-64p2t,Uid:11140329-f7e3-441b-979e-c0443fd17e9d,Namespace:calico-system,Attempt:0,}" Nov 8 00:32:14.837602 containerd[1494]: time="2025-11-08T00:32:14.837554588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7689cf9c54-vlx96,Uid:5d526354-b399-458e-b2b3-be2f314ae23a,Namespace:calico-system,Attempt:0,}" Nov 8 00:32:14.842134 containerd[1494]: time="2025-11-08T00:32:14.842107019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lzqlc,Uid:417d4903-c711-42c7-9ef7-788a2e600314,Namespace:calico-system,Attempt:0,}" Nov 8 00:32:14.848834 containerd[1494]: time="2025-11-08T00:32:14.848804382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bdc4f9f54-592vx,Uid:b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:32:14.856501 containerd[1494]: time="2025-11-08T00:32:14.856445654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bdc4f9f54-9vq7q,Uid:51a57672-a43f-42d3-abfb-83cef5f71936,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:32:14.860030 kubelet[2505]: E1108 00:32:14.859993 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:14.861578 containerd[1494]: time="2025-11-08T00:32:14.861548191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58f4z,Uid:123d6cb1-1650-4283-829f-77b1235c57a8,Namespace:kube-system,Attempt:0,}" Nov 8 00:32:14.867367 kubelet[2505]: E1108 00:32:14.867316 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:14.867888 containerd[1494]: time="2025-11-08T00:32:14.867836833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h8h25,Uid:08a2c12f-2341-4bf8-ac6e-959cce58e330,Namespace:kube-system,Attempt:0,}" Nov 8 00:32:15.000513 containerd[1494]: time="2025-11-08T00:32:15.000351281Z" level=error msg="Failed to destroy network for sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.002314 containerd[1494]: time="2025-11-08T00:32:15.002086881Z" level=error msg="encountered an error cleaning up failed sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.002314 containerd[1494]: time="2025-11-08T00:32:15.002141364Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7689cf9c54-vlx96,Uid:5d526354-b399-458e-b2b3-be2f314ae23a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.021825 kubelet[2505]: E1108 00:32:15.021702 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.024702 containerd[1494]: time="2025-11-08T00:32:15.023406560Z" level=error msg="Failed to destroy network for sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.025253 containerd[1494]: time="2025-11-08T00:32:15.025222893Z" level=error msg="encountered an error cleaning up failed sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.025299 containerd[1494]: time="2025-11-08T00:32:15.025278058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8696ddb695-64p2t,Uid:11140329-f7e3-441b-979e-c0443fd17e9d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.025463 kubelet[2505]: E1108 00:32:15.025426 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.029768 kubelet[2505]: E1108 00:32:15.029709 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" Nov 8 00:32:15.029899 kubelet[2505]: E1108 00:32:15.029876 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" Nov 8 00:32:15.030082 kubelet[2505]: E1108 00:32:15.029983 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8696ddb695-64p2t" Nov 8 00:32:15.030563 kubelet[2505]: E1108 00:32:15.030088 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8696ddb695-64p2t" Nov 8 00:32:15.030709 kubelet[2505]: E1108 00:32:15.030574 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8696ddb695-64p2t_calico-system(11140329-f7e3-441b-979e-c0443fd17e9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8696ddb695-64p2t_calico-system(11140329-f7e3-441b-979e-c0443fd17e9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8696ddb695-64p2t" podUID="11140329-f7e3-441b-979e-c0443fd17e9d" Nov 8 00:32:15.030709 kubelet[2505]: E1108 00:32:15.030643 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7689cf9c54-vlx96_calico-system(5d526354-b399-458e-b2b3-be2f314ae23a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7689cf9c54-vlx96_calico-system(5d526354-b399-458e-b2b3-be2f314ae23a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" podUID="5d526354-b399-458e-b2b3-be2f314ae23a" Nov 8 00:32:15.045425 containerd[1494]: time="2025-11-08T00:32:15.045366166Z" level=error msg="Failed to destroy network for sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.048115 containerd[1494]: time="2025-11-08T00:32:15.048071884Z" level=error msg="encountered an error cleaning up failed sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.048177 containerd[1494]: time="2025-11-08T00:32:15.048131566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lzqlc,Uid:417d4903-c711-42c7-9ef7-788a2e600314,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.048492 kubelet[2505]: E1108 00:32:15.048357 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.048492 kubelet[2505]: E1108 00:32:15.048418 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lzqlc" Nov 8 00:32:15.048492 kubelet[2505]: E1108 00:32:15.048440 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lzqlc" Nov 8 00:32:15.048734 kubelet[2505]: E1108 00:32:15.048672 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-lzqlc_calico-system(417d4903-c711-42c7-9ef7-788a2e600314)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-lzqlc_calico-system(417d4903-c711-42c7-9ef7-788a2e600314)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lzqlc" podUID="417d4903-c711-42c7-9ef7-788a2e600314" Nov 8 00:32:15.056346 containerd[1494]: time="2025-11-08T00:32:15.056168269Z" level=error msg="Failed to destroy network for sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.056672 containerd[1494]: time="2025-11-08T00:32:15.056646099Z" level=error msg="encountered an error cleaning up failed sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.056746 containerd[1494]: time="2025-11-08T00:32:15.056688729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bdc4f9f54-9vq7q,Uid:51a57672-a43f-42d3-abfb-83cef5f71936,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.056882 containerd[1494]: time="2025-11-08T00:32:15.056774491Z" level=error msg="Failed to destroy network for sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.056939 kubelet[2505]: E1108 00:32:15.056898 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.057011 kubelet[2505]: E1108 00:32:15.056967 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" Nov 8 00:32:15.057011 kubelet[2505]: E1108 00:32:15.056988 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" Nov 8 00:32:15.057215 kubelet[2505]: E1108 00:32:15.057045 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bdc4f9f54-9vq7q_calico-apiserver(51a57672-a43f-42d3-abfb-83cef5f71936)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bdc4f9f54-9vq7q_calico-apiserver(51a57672-a43f-42d3-abfb-83cef5f71936)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" podUID="51a57672-a43f-42d3-abfb-83cef5f71936" Nov 8 00:32:15.059289 containerd[1494]: time="2025-11-08T00:32:15.059237241Z" level=error msg="encountered an error cleaning up failed sandbox 
\"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.059289 containerd[1494]: time="2025-11-08T00:32:15.059299398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bdc4f9f54-592vx,Uid:b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.059564 kubelet[2505]: E1108 00:32:15.059519 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.059641 kubelet[2505]: E1108 00:32:15.059573 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" Nov 8 00:32:15.059641 kubelet[2505]: E1108 00:32:15.059594 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" Nov 8 00:32:15.059698 kubelet[2505]: E1108 00:32:15.059636 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bdc4f9f54-592vx_calico-apiserver(b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bdc4f9f54-592vx_calico-apiserver(b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" podUID="b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9" Nov 8 00:32:15.075922 containerd[1494]: time="2025-11-08T00:32:15.075839983Z" level=error msg="Failed to destroy network for sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.076705 
containerd[1494]: time="2025-11-08T00:32:15.076660870Z" level=error msg="Failed to destroy network for sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.077081 containerd[1494]: time="2025-11-08T00:32:15.077056555Z" level=error msg="encountered an error cleaning up failed sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.077132 containerd[1494]: time="2025-11-08T00:32:15.077110497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58f4z,Uid:123d6cb1-1650-4283-829f-77b1235c57a8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.077365 kubelet[2505]: E1108 00:32:15.077320 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.077408 kubelet[2505]: E1108 00:32:15.077384 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-58f4z" Nov 8 00:32:15.077408 kubelet[2505]: E1108 00:32:15.077404 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-58f4z" Nov 8 00:32:15.077483 kubelet[2505]: E1108 00:32:15.077444 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-58f4z_kube-system(123d6cb1-1650-4283-829f-77b1235c57a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-58f4z_kube-system(123d6cb1-1650-4283-829f-77b1235c57a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-58f4z" 
podUID="123d6cb1-1650-4283-829f-77b1235c57a8" Nov 8 00:32:15.096688 containerd[1494]: time="2025-11-08T00:32:15.096610377Z" level=error msg="encountered an error cleaning up failed sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.096688 containerd[1494]: time="2025-11-08T00:32:15.096678376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h8h25,Uid:08a2c12f-2341-4bf8-ac6e-959cce58e330,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.096890 kubelet[2505]: E1108 00:32:15.096860 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.096944 kubelet[2505]: E1108 00:32:15.096908 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h8h25" Nov 8 00:32:15.096944 kubelet[2505]: E1108 00:32:15.096928 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-h8h25" Nov 8 00:32:15.097010 kubelet[2505]: E1108 00:32:15.096986 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-h8h25_kube-system(08a2c12f-2341-4bf8-ac6e-959cce58e330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-h8h25_kube-system(08a2c12f-2341-4bf8-ac6e-959cce58e330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h8h25" podUID="08a2c12f-2341-4bf8-ac6e-959cce58e330" Nov 8 00:32:15.455980 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61-shm.mount: Deactivated successfully. 
Nov 8 00:32:15.665608 systemd[1]: Created slice kubepods-besteffort-pod88835561_0fd8_4963_bbc3_b0aaf46c9820.slice - libcontainer container kubepods-besteffort-pod88835561_0fd8_4963_bbc3_b0aaf46c9820.slice. Nov 8 00:32:15.668665 containerd[1494]: time="2025-11-08T00:32:15.668621132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lkl4b,Uid:88835561-0fd8-4963-bbc3-b0aaf46c9820,Namespace:calico-system,Attempt:0,}" Nov 8 00:32:15.729628 containerd[1494]: time="2025-11-08T00:32:15.729442027Z" level=error msg="Failed to destroy network for sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.729983 containerd[1494]: time="2025-11-08T00:32:15.729933774Z" level=error msg="encountered an error cleaning up failed sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.730065 containerd[1494]: time="2025-11-08T00:32:15.730000809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lkl4b,Uid:88835561-0fd8-4963-bbc3-b0aaf46c9820,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.730299 kubelet[2505]: E1108 00:32:15.730255 2505 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.730373 kubelet[2505]: E1108 00:32:15.730334 2505 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lkl4b" Nov 8 00:32:15.730373 kubelet[2505]: E1108 00:32:15.730360 2505 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lkl4b" Nov 8 00:32:15.730422 kubelet[2505]: E1108 00:32:15.730405 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lkl4b_calico-system(88835561-0fd8-4963-bbc3-b0aaf46c9820)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-lkl4b_calico-system(88835561-0fd8-4963-bbc3-b0aaf46c9820)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:15.732578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3-shm.mount: Deactivated successfully. Nov 8 00:32:15.753285 kubelet[2505]: I1108 00:32:15.753247 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:15.754992 kubelet[2505]: I1108 00:32:15.754947 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:15.757122 kubelet[2505]: I1108 00:32:15.756014 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:15.757269 kubelet[2505]: I1108 00:32:15.757227 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:15.788002 containerd[1494]: time="2025-11-08T00:32:15.787677110Z" level=info msg="StopPodSandbox for \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\"" Nov 8 00:32:15.788002 containerd[1494]: time="2025-11-08T00:32:15.787731023Z" level=info msg="StopPodSandbox for \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\"" Nov 8 00:32:15.788002 containerd[1494]: time="2025-11-08T00:32:15.787939135Z" level=info msg="StopPodSandbox for \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\"" Nov 8 00:32:15.788249 containerd[1494]: time="2025-11-08T00:32:15.788210035Z" level=info msg="StopPodSandbox for \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\"" Nov 8 00:32:15.789349 containerd[1494]: time="2025-11-08T00:32:15.789024609Z" level=info msg="Ensure that sandbox 300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80 in task-service has been cleanup successfully" Nov 8 00:32:15.789349 containerd[1494]: time="2025-11-08T00:32:15.789037795Z" level=info msg="Ensure that sandbox 80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39 in task-service has been cleanup successfully" Nov 8 00:32:15.789549 kubelet[2505]: I1108 00:32:15.789513 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:15.789854 containerd[1494]: time="2025-11-08T00:32:15.789045750Z" level=info msg="Ensure that sandbox aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1 in task-service has been cleanup successfully" Nov 8 00:32:15.790236 containerd[1494]: time="2025-11-08T00:32:15.790175638Z" level=info msg="StopPodSandbox for \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\"" Nov 8 00:32:15.791102 containerd[1494]: time="2025-11-08T00:32:15.790456207Z" level=info msg="Ensure that sandbox 5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3 in task-service has been 
cleanup successfully" Nov 8 00:32:15.796262 containerd[1494]: time="2025-11-08T00:32:15.789048505Z" level=info msg="Ensure that sandbox 94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61 in task-service has been cleanup successfully" Nov 8 00:32:15.801514 kubelet[2505]: I1108 00:32:15.800779 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:15.802634 containerd[1494]: time="2025-11-08T00:32:15.802597473Z" level=info msg="StopPodSandbox for \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\"" Nov 8 00:32:15.802898 containerd[1494]: time="2025-11-08T00:32:15.802879243Z" level=info msg="Ensure that sandbox db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87 in task-service has been cleanup successfully" Nov 8 00:32:15.805658 kubelet[2505]: I1108 00:32:15.805355 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:15.810136 containerd[1494]: time="2025-11-08T00:32:15.810115228Z" level=info msg="StopPodSandbox for \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\"" Nov 8 00:32:15.810478 containerd[1494]: time="2025-11-08T00:32:15.810459185Z" level=info msg="Ensure that sandbox 32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d in task-service has been cleanup successfully" Nov 8 00:32:15.815016 kubelet[2505]: I1108 00:32:15.814997 2505 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:15.817259 containerd[1494]: time="2025-11-08T00:32:15.817216818Z" level=info msg="StopPodSandbox for \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\"" Nov 8 00:32:15.818353 containerd[1494]: time="2025-11-08T00:32:15.818138414Z" level=info msg="Ensure that sandbox 58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274 in task-service has been cleanup successfully" Nov 8 00:32:15.867277 containerd[1494]: time="2025-11-08T00:32:15.867224630Z" level=error msg="StopPodSandbox for \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\" failed" error="failed to destroy network for sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.867698 kubelet[2505]: E1108 00:32:15.867660 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:15.868126 kubelet[2505]: E1108 00:32:15.868069 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1"} Nov 8 00:32:15.868268 kubelet[2505]: E1108 00:32:15.868207 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:32:15.868268 kubelet[2505]: E1108 00:32:15.868235 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" podUID="b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9" Nov 8 00:32:15.880506 containerd[1494]: time="2025-11-08T00:32:15.880445991Z" level=error msg="StopPodSandbox for \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\" failed" error="failed to destroy network for sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.880783 kubelet[2505]: E1108 00:32:15.880733 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:15.880861 kubelet[2505]: E1108 00:32:15.880795 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39"} Nov 8 00:32:15.880861 kubelet[2505]: E1108 00:32:15.880829 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d526354-b399-458e-b2b3-be2f314ae23a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:32:15.880967 kubelet[2505]: E1108 00:32:15.880872 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d526354-b399-458e-b2b3-be2f314ae23a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" podUID="5d526354-b399-458e-b2b3-be2f314ae23a" Nov 8 00:32:15.882645 containerd[1494]: 
time="2025-11-08T00:32:15.882581254Z" level=error msg="StopPodSandbox for \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\" failed" error="failed to destroy network for sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.882942 kubelet[2505]: E1108 00:32:15.882809 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:15.882942 kubelet[2505]: E1108 00:32:15.882860 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80"} Nov 8 00:32:15.882942 kubelet[2505]: E1108 00:32:15.882893 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"51a57672-a43f-42d3-abfb-83cef5f71936\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:32:15.882942 kubelet[2505]: E1108 00:32:15.882915 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"51a57672-a43f-42d3-abfb-83cef5f71936\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" podUID="51a57672-a43f-42d3-abfb-83cef5f71936" Nov 8 00:32:15.888556 containerd[1494]: time="2025-11-08T00:32:15.888518862Z" level=error msg="StopPodSandbox for \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\" failed" error="failed to destroy network for sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.888897 kubelet[2505]: E1108 00:32:15.888842 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:15.888976 kubelet[2505]: E1108 00:32:15.888900 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d"} Nov 8 00:32:15.888976 kubelet[2505]: E1108 00:32:15.888936 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"123d6cb1-1650-4283-829f-77b1235c57a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:32:15.889052 kubelet[2505]: E1108 00:32:15.888983 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"123d6cb1-1650-4283-829f-77b1235c57a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-58f4z" podUID="123d6cb1-1650-4283-829f-77b1235c57a8" Nov 8 00:32:15.941339 containerd[1494]: time="2025-11-08T00:32:15.941296853Z" level=error msg="StopPodSandbox for \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\" failed" error="failed to destroy network for sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.941628 kubelet[2505]: E1108 00:32:15.941603 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:15.941758 kubelet[2505]: E1108 00:32:15.941741 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87"} Nov 8 00:32:15.941835 kubelet[2505]: E1108 00:32:15.941821 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08a2c12f-2341-4bf8-ac6e-959cce58e330\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:32:15.941994 kubelet[2505]: E1108 00:32:15.941935 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08a2c12f-2341-4bf8-ac6e-959cce58e330\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-h8h25" podUID="08a2c12f-2341-4bf8-ac6e-959cce58e330" Nov 8 00:32:15.943158 containerd[1494]: time="2025-11-08T00:32:15.942925632Z" level=error msg="StopPodSandbox for \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\" failed" error="failed to destroy network for sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.943348 kubelet[2505]: E1108 00:32:15.943312 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:15.943381 kubelet[2505]: E1108 00:32:15.943352 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61"} Nov 8 00:32:15.943415 kubelet[2505]: E1108 00:32:15.943382 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11140329-f7e3-441b-979e-c0443fd17e9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:32:15.943463 kubelet[2505]: E1108 00:32:15.943407 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11140329-f7e3-441b-979e-c0443fd17e9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8696ddb695-64p2t" podUID="11140329-f7e3-441b-979e-c0443fd17e9d" Nov 8 00:32:15.945423 containerd[1494]: time="2025-11-08T00:32:15.945347024Z" level=error msg="StopPodSandbox for \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\" failed" error="failed to destroy network for sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.945629 kubelet[2505]: E1108 00:32:15.945591 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:15.945679 kubelet[2505]: E1108 00:32:15.945644 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274"} Nov 8 00:32:15.945703 kubelet[2505]: E1108 00:32:15.945682 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"417d4903-c711-42c7-9ef7-788a2e600314\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:32:15.945755 kubelet[2505]: E1108 00:32:15.945705 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"417d4903-c711-42c7-9ef7-788a2e600314\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lzqlc" podUID="417d4903-c711-42c7-9ef7-788a2e600314" Nov 8 00:32:15.958434 containerd[1494]: time="2025-11-08T00:32:15.958386402Z" level=error msg="StopPodSandbox for \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\" failed" error="failed to destroy network for sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:32:15.958609 kubelet[2505]: E1108 00:32:15.958553 2505 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:15.958645 kubelet[2505]: E1108 00:32:15.958620 2505 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3"} Nov 8 00:32:15.958674 kubelet[2505]: E1108 00:32:15.958659 2505 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88835561-0fd8-4963-bbc3-b0aaf46c9820\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:32:15.958722 kubelet[2505]: E1108 00:32:15.958684 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88835561-0fd8-4963-bbc3-b0aaf46c9820\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:22.655009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035201112.mount: Deactivated successfully. Nov 8 00:32:24.317521 containerd[1494]: time="2025-11-08T00:32:24.317428704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:24.318665 containerd[1494]: time="2025-11-08T00:32:24.318608802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:32:24.319887 containerd[1494]: time="2025-11-08T00:32:24.319850538Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:24.322170 containerd[1494]: time="2025-11-08T00:32:24.322127227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:32:24.322868 containerd[1494]: time="2025-11-08T00:32:24.322837653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.570262981s" Nov 8 00:32:24.322916 containerd[1494]: time="2025-11-08T00:32:24.322884050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:32:24.334559 containerd[1494]: time="2025-11-08T00:32:24.334500110Z" level=info msg="CreateContainer within sandbox \"3d76d18a9a9f8730845366bb2a7e3ea2dd7e91c93d1d1cd2f57f0540d9d02dbc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:32:24.356940 containerd[1494]: time="2025-11-08T00:32:24.356891720Z" level=info msg="CreateContainer within sandbox \"3d76d18a9a9f8730845366bb2a7e3ea2dd7e91c93d1d1cd2f57f0540d9d02dbc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e41573685f8822e40b9eaf39dede4129f5173c12a9263c9a280dfeb5242a18bc\"" Nov 8 00:32:24.357597 containerd[1494]: time="2025-11-08T00:32:24.357553804Z" level=info msg="StartContainer for \"e41573685f8822e40b9eaf39dede4129f5173c12a9263c9a280dfeb5242a18bc\"" Nov 8 00:32:24.413184 systemd[1]: Started cri-containerd-e41573685f8822e40b9eaf39dede4129f5173c12a9263c9a280dfeb5242a18bc.scope - libcontainer container e41573685f8822e40b9eaf39dede4129f5173c12a9263c9a280dfeb5242a18bc. Nov 8 00:32:24.642144 containerd[1494]: time="2025-11-08T00:32:24.642061953Z" level=info msg="StartContainer for \"e41573685f8822e40b9eaf39dede4129f5173c12a9263c9a280dfeb5242a18bc\" returns successfully" Nov 8 00:32:24.686088 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:32:24.686243 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 8 00:32:24.770135 containerd[1494]: time="2025-11-08T00:32:24.769830382Z" level=info msg="StopPodSandbox for \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\"" Nov 8 00:32:24.837537 kubelet[2505]: E1108 00:32:24.836837 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.833 [INFO][3761] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.833 [INFO][3761] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" iface="eth0" netns="/var/run/netns/cni-48682de2-1b40-c71b-30e4-3f743081fd4d" Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.834 [INFO][3761] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" iface="eth0" netns="/var/run/netns/cni-48682de2-1b40-c71b-30e4-3f743081fd4d" Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.834 [INFO][3761] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" iface="eth0" netns="/var/run/netns/cni-48682de2-1b40-c71b-30e4-3f743081fd4d" Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.834 [INFO][3761] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.834 [INFO][3761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.895 [INFO][3772] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" HandleID="k8s-pod-network.94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Workload="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.895 [INFO][3772] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.896 [INFO][3772] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.901 [WARNING][3772] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" HandleID="k8s-pod-network.94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Workload="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.901 [INFO][3772] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" HandleID="k8s-pod-network.94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Workload="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.903 [INFO][3772] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:32:24.909765 containerd[1494]: 2025-11-08 00:32:24.906 [INFO][3761] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:24.910423 containerd[1494]: time="2025-11-08T00:32:24.909868190Z" level=info msg="TearDown network for sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\" successfully" Nov 8 00:32:24.910423 containerd[1494]: time="2025-11-08T00:32:24.909901983Z" level=info msg="StopPodSandbox for \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\" returns successfully" Nov 8 00:32:25.007825 systemd[1]: Started sshd@7-10.0.0.145:22-10.0.0.1:42562.service - OpenSSH per-connection server daemon (10.0.0.1:42562). Nov 8 00:32:25.013339 kubelet[2505]: I1108 00:32:25.011815 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11140329-f7e3-441b-979e-c0443fd17e9d-whisker-backend-key-pair\") pod \"11140329-f7e3-441b-979e-c0443fd17e9d\" (UID: \"11140329-f7e3-441b-979e-c0443fd17e9d\") " Nov 8 00:32:25.013339 kubelet[2505]: I1108 00:32:25.011857 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5npvc\" (UniqueName: \"kubernetes.io/projected/11140329-f7e3-441b-979e-c0443fd17e9d-kube-api-access-5npvc\") pod \"11140329-f7e3-441b-979e-c0443fd17e9d\" (UID: \"11140329-f7e3-441b-979e-c0443fd17e9d\") " Nov 8 00:32:25.013339 kubelet[2505]: I1108 00:32:25.011879 2505 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11140329-f7e3-441b-979e-c0443fd17e9d-whisker-ca-bundle\") pod \"11140329-f7e3-441b-979e-c0443fd17e9d\" (UID: \"11140329-f7e3-441b-979e-c0443fd17e9d\") " Nov 8 00:32:25.013339 kubelet[2505]: I1108 00:32:25.012364 2505 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11140329-f7e3-441b-979e-c0443fd17e9d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "11140329-f7e3-441b-979e-c0443fd17e9d" (UID: "11140329-f7e3-441b-979e-c0443fd17e9d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:32:25.018842 kubelet[2505]: I1108 00:32:25.018469 2505 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11140329-f7e3-441b-979e-c0443fd17e9d-kube-api-access-5npvc" (OuterVolumeSpecName: "kube-api-access-5npvc") pod "11140329-f7e3-441b-979e-c0443fd17e9d" (UID: "11140329-f7e3-441b-979e-c0443fd17e9d"). InnerVolumeSpecName "kube-api-access-5npvc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:32:25.019142 kubelet[2505]: I1108 00:32:25.019095 2505 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11140329-f7e3-441b-979e-c0443fd17e9d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "11140329-f7e3-441b-979e-c0443fd17e9d" (UID: "11140329-f7e3-441b-979e-c0443fd17e9d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:32:25.072466 sshd[3787]: Accepted publickey for core from 10.0.0.1 port 42562 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:32:25.074286 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:25.078859 systemd-logind[1456]: New session 8 of user core. 
Nov 8 00:32:25.091129 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:32:25.112905 kubelet[2505]: I1108 00:32:25.112864 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11140329-f7e3-441b-979e-c0443fd17e9d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 8 00:32:25.112905 kubelet[2505]: I1108 00:32:25.112892 2505 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5npvc\" (UniqueName: \"kubernetes.io/projected/11140329-f7e3-441b-979e-c0443fd17e9d-kube-api-access-5npvc\") on node \"localhost\" DevicePath \"\"" Nov 8 00:32:25.112905 kubelet[2505]: I1108 00:32:25.112902 2505 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11140329-f7e3-441b-979e-c0443fd17e9d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 8 00:32:25.218507 sshd[3787]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:25.222680 systemd[1]: sshd@7-10.0.0.145:22-10.0.0.1:42562.service: Deactivated successfully. Nov 8 00:32:25.225359 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:32:25.226036 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:32:25.227024 systemd-logind[1456]: Removed session 8. Nov 8 00:32:25.330570 systemd[1]: run-netns-cni\x2d48682de2\x2d1b40\x2dc71b\x2d30e4\x2d3f743081fd4d.mount: Deactivated successfully. Nov 8 00:32:25.330715 systemd[1]: var-lib-kubelet-pods-11140329\x2df7e3\x2d441b\x2d979e\x2dc0443fd17e9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5npvc.mount: Deactivated successfully. Nov 8 00:32:25.330828 systemd[1]: var-lib-kubelet-pods-11140329\x2df7e3\x2d441b\x2d979e\x2dc0443fd17e9d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:32:25.657398 systemd[1]: Removed slice kubepods-besteffort-pod11140329_f7e3_441b_979e_c0443fd17e9d.slice - libcontainer container kubepods-besteffort-pod11140329_f7e3_441b_979e_c0443fd17e9d.slice. Nov 8 00:32:25.836686 kubelet[2505]: I1108 00:32:25.836652 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:32:25.838367 kubelet[2505]: E1108 00:32:25.837713 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:25.855051 kubelet[2505]: I1108 00:32:25.854937 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-plrz8" podStartSLOduration=2.503532302 podStartE2EDuration="20.854920247s" podCreationTimestamp="2025-11-08 00:32:05 +0000 UTC" firstStartedPulling="2025-11-08 00:32:05.97225883 +0000 UTC m=+20.406477444" lastFinishedPulling="2025-11-08 00:32:24.323646785 +0000 UTC m=+38.757865389" observedRunningTime="2025-11-08 00:32:24.856983651 +0000 UTC m=+39.291202265" watchObservedRunningTime="2025-11-08 00:32:25.854920247 +0000 UTC m=+40.289138861" Nov 8 00:32:25.894587 systemd[1]: Created slice kubepods-besteffort-pod576c7105_d7be_4c5c_87aa_116f53250b26.slice - libcontainer container kubepods-besteffort-pod576c7105_d7be_4c5c_87aa_116f53250b26.slice. 
Nov 8 00:32:25.918079 kubelet[2505]: I1108 00:32:25.917609 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/576c7105-d7be-4c5c-87aa-116f53250b26-whisker-ca-bundle\") pod \"whisker-68ff55f559-tjknv\" (UID: \"576c7105-d7be-4c5c-87aa-116f53250b26\") " pod="calico-system/whisker-68ff55f559-tjknv" Nov 8 00:32:25.918079 kubelet[2505]: I1108 00:32:25.917677 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnszr\" (UniqueName: \"kubernetes.io/projected/576c7105-d7be-4c5c-87aa-116f53250b26-kube-api-access-mnszr\") pod \"whisker-68ff55f559-tjknv\" (UID: \"576c7105-d7be-4c5c-87aa-116f53250b26\") " pod="calico-system/whisker-68ff55f559-tjknv" Nov 8 00:32:25.918079 kubelet[2505]: I1108 00:32:25.917718 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/576c7105-d7be-4c5c-87aa-116f53250b26-whisker-backend-key-pair\") pod \"whisker-68ff55f559-tjknv\" (UID: \"576c7105-d7be-4c5c-87aa-116f53250b26\") " pod="calico-system/whisker-68ff55f559-tjknv" Nov 8 00:32:26.199366 containerd[1494]: time="2025-11-08T00:32:26.198838666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68ff55f559-tjknv,Uid:576c7105-d7be-4c5c-87aa-116f53250b26,Namespace:calico-system,Attempt:0,}" Nov 8 00:32:26.257994 kernel: bpftool[3951]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:32:26.341597 systemd-networkd[1407]: cali51a49bf92a1: Link UP Nov 8 00:32:26.342657 systemd-networkd[1407]: cali51a49bf92a1: Gained carrier Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.272 [INFO][3931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--68ff55f559--tjknv-eth0 whisker-68ff55f559- calico-system 576c7105-d7be-4c5c-87aa-116f53250b26 991 0 2025-11-08 00:32:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68ff55f559 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-68ff55f559-tjknv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali51a49bf92a1 [] [] }} ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Namespace="calico-system" Pod="whisker-68ff55f559-tjknv" WorkloadEndpoint="localhost-k8s-whisker--68ff55f559--tjknv-" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.272 [INFO][3931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Namespace="calico-system" Pod="whisker-68ff55f559-tjknv" WorkloadEndpoint="localhost-k8s-whisker--68ff55f559--tjknv-eth0" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.302 [INFO][3955] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" HandleID="k8s-pod-network.82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Workload="localhost-k8s-whisker--68ff55f559--tjknv-eth0" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.302 [INFO][3955] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" 
HandleID="k8s-pod-network.82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Workload="localhost-k8s-whisker--68ff55f559--tjknv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000328870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-68ff55f559-tjknv", "timestamp":"2025-11-08 00:32:26.302015346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.302 [INFO][3955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.302 [INFO][3955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.302 [INFO][3955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.309 [INFO][3955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" host="localhost" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.315 [INFO][3955] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.318 [INFO][3955] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.320 [INFO][3955] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.321 [INFO][3955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.321 [INFO][3955] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" host="localhost" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.322 [INFO][3955] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914 Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.326 [INFO][3955] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" host="localhost" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.329 [INFO][3955] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" host="localhost" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.329 [INFO][3955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" host="localhost" Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.329 [INFO][3955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:32:26.360174 containerd[1494]: 2025-11-08 00:32:26.329 [INFO][3955] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" HandleID="k8s-pod-network.82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Workload="localhost-k8s-whisker--68ff55f559--tjknv-eth0" Nov 8 00:32:26.361066 containerd[1494]: 2025-11-08 00:32:26.333 [INFO][3931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Namespace="calico-system" Pod="whisker-68ff55f559-tjknv" WorkloadEndpoint="localhost-k8s-whisker--68ff55f559--tjknv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68ff55f559--tjknv-eth0", GenerateName:"whisker-68ff55f559-", Namespace:"calico-system", SelfLink:"", UID:"576c7105-d7be-4c5c-87aa-116f53250b26", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68ff55f559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-68ff55f559-tjknv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali51a49bf92a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:26.361066 containerd[1494]: 2025-11-08 00:32:26.334 [INFO][3931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Namespace="calico-system" Pod="whisker-68ff55f559-tjknv" WorkloadEndpoint="localhost-k8s-whisker--68ff55f559--tjknv-eth0" Nov 8 00:32:26.361066 containerd[1494]: 2025-11-08 00:32:26.334 [INFO][3931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51a49bf92a1 ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Namespace="calico-system" Pod="whisker-68ff55f559-tjknv" WorkloadEndpoint="localhost-k8s-whisker--68ff55f559--tjknv-eth0" Nov 8 00:32:26.361066 containerd[1494]: 2025-11-08 00:32:26.342 [INFO][3931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Namespace="calico-system" Pod="whisker-68ff55f559-tjknv" WorkloadEndpoint="localhost-k8s-whisker--68ff55f559--tjknv-eth0" Nov 8 00:32:26.361066 containerd[1494]: 2025-11-08 00:32:26.343 [INFO][3931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Namespace="calico-system" Pod="whisker-68ff55f559-tjknv" WorkloadEndpoint="localhost-k8s-whisker--68ff55f559--tjknv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68ff55f559--tjknv-eth0", GenerateName:"whisker-68ff55f559-", Namespace:"calico-system", SelfLink:"", UID:"576c7105-d7be-4c5c-87aa-116f53250b26", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68ff55f559", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914", Pod:"whisker-68ff55f559-tjknv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali51a49bf92a1", MAC:"12:80:b4:35:ef:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:26.361066 containerd[1494]: 2025-11-08 00:32:26.355 [INFO][3931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914" Namespace="calico-system" Pod="whisker-68ff55f559-tjknv" WorkloadEndpoint="localhost-k8s-whisker--68ff55f559--tjknv-eth0" Nov 8 00:32:26.395522 containerd[1494]: time="2025-11-08T00:32:26.395397878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:32:26.395522 containerd[1494]: time="2025-11-08T00:32:26.395468892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:32:26.395522 containerd[1494]: time="2025-11-08T00:32:26.395502334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:26.395720 containerd[1494]: time="2025-11-08T00:32:26.395618583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:26.427201 systemd[1]: Started cri-containerd-82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914.scope - libcontainer container 82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914. 
Nov 8 00:32:26.440285 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:32:26.468743 containerd[1494]: time="2025-11-08T00:32:26.468623265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68ff55f559-tjknv,Uid:576c7105-d7be-4c5c-87aa-116f53250b26,Namespace:calico-system,Attempt:0,} returns sandbox id \"82c6462ffba883e9451a4539ec16af2bd89c3f63d54df2f12ac0174d3ed73914\"" Nov 8 00:32:26.472144 containerd[1494]: time="2025-11-08T00:32:26.472109988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:32:26.527166 systemd-networkd[1407]: vxlan.calico: Link UP Nov 8 00:32:26.527176 systemd-networkd[1407]: vxlan.calico: Gained carrier Nov 8 00:32:26.650640 containerd[1494]: time="2025-11-08T00:32:26.650576917Z" level=info msg="StopPodSandbox for \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\"" Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.695 [INFO][4061] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.695 [INFO][4061] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" iface="eth0" netns="/var/run/netns/cni-9db9883f-9c10-91b1-152b-859262012125" Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.695 [INFO][4061] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" iface="eth0" netns="/var/run/netns/cni-9db9883f-9c10-91b1-152b-859262012125" Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.696 [INFO][4061] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" iface="eth0" netns="/var/run/netns/cni-9db9883f-9c10-91b1-152b-859262012125" Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.696 [INFO][4061] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.696 [INFO][4061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.719 [INFO][4069] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" HandleID="k8s-pod-network.300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.719 [INFO][4069] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.719 [INFO][4069] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.725 [WARNING][4069] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" HandleID="k8s-pod-network.300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.725 [INFO][4069] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" HandleID="k8s-pod-network.300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.726 [INFO][4069] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:26.734701 containerd[1494]: 2025-11-08 00:32:26.731 [INFO][4061] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:26.735269 containerd[1494]: time="2025-11-08T00:32:26.734785086Z" level=info msg="TearDown network for sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\" successfully" Nov 8 00:32:26.735269 containerd[1494]: time="2025-11-08T00:32:26.734812448Z" level=info msg="StopPodSandbox for \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\" returns successfully" Nov 8 00:32:26.735474 containerd[1494]: time="2025-11-08T00:32:26.735454393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bdc4f9f54-9vq7q,Uid:51a57672-a43f-42d3-abfb-83cef5f71936,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:32:26.738562 systemd[1]: run-netns-cni\x2d9db9883f\x2d9c10\x2d91b1\x2d152b\x2d859262012125.mount: Deactivated successfully. Nov 8 00:32:26.842672 containerd[1494]: time="2025-11-08T00:32:26.842611355Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:26.867751 containerd[1494]: time="2025-11-08T00:32:26.851384294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:32:26.868057 containerd[1494]: time="2025-11-08T00:32:26.851483560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:32:26.868350 kubelet[2505]: E1108 00:32:26.868317 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:32:26.868909 kubelet[2505]: E1108 00:32:26.868782 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:32:26.869158 kubelet[2505]: E1108 00:32:26.869084 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4c461a51f27f46ffb1d37efc97264654,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnszr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68ff55f559-tjknv_calico-system(576c7105-d7be-4c5c-87aa-116f53250b26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:26.871522 containerd[1494]: time="2025-11-08T00:32:26.871478581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:32:26.886144 systemd-networkd[1407]: calib9d8307f110: Link UP Nov 8 00:32:26.886410 systemd-networkd[1407]: calib9d8307f110: Gained carrier Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.795 [INFO][4086] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0 calico-apiserver-6bdc4f9f54- calico-apiserver 51a57672-a43f-42d3-abfb-83cef5f71936 1001 0 2025-11-08 00:32:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bdc4f9f54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bdc4f9f54-9vq7q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9d8307f110 [] [] }} ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-9vq7q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.796 [INFO][4086] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-9vq7q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.832 
[INFO][4120] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" HandleID="k8s-pod-network.f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.833 [INFO][4120] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" HandleID="k8s-pod-network.f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042aee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bdc4f9f54-9vq7q", "timestamp":"2025-11-08 00:32:26.832842064 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.833 [INFO][4120] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.833 [INFO][4120] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.833 [INFO][4120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.843 [INFO][4120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" host="localhost" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.849 [INFO][4120] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.855 [INFO][4120] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.858 [INFO][4120] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.861 [INFO][4120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.861 [INFO][4120] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" host="localhost" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.863 [INFO][4120] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432 Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.869 [INFO][4120] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" host="localhost" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.876 [INFO][4120] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" host="localhost" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.876 [INFO][4120] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" host="localhost" Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.876 [INFO][4120] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:26.900947 containerd[1494]: 2025-11-08 00:32:26.877 [INFO][4120] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" HandleID="k8s-pod-network.f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:26.901534 containerd[1494]: 2025-11-08 00:32:26.882 [INFO][4086] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-9vq7q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0", GenerateName:"calico-apiserver-6bdc4f9f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"51a57672-a43f-42d3-abfb-83cef5f71936", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bdc4f9f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bdc4f9f54-9vq7q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9d8307f110", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:26.901534 containerd[1494]: 2025-11-08 00:32:26.882 [INFO][4086] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-9vq7q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:26.901534 containerd[1494]: 2025-11-08 00:32:26.882 [INFO][4086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9d8307f110 ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-9vq7q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:26.901534 containerd[1494]: 2025-11-08 00:32:26.885 [INFO][4086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Namespace="calico-apiserver" 
Pod="calico-apiserver-6bdc4f9f54-9vq7q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:26.901534 containerd[1494]: 2025-11-08 00:32:26.886 [INFO][4086] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-9vq7q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0", GenerateName:"calico-apiserver-6bdc4f9f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"51a57672-a43f-42d3-abfb-83cef5f71936", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bdc4f9f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432", Pod:"calico-apiserver-6bdc4f9f54-9vq7q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9d8307f110", MAC:"6a:23:6f:3c:72:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:26.901534 containerd[1494]: 2025-11-08 00:32:26.896 [INFO][4086] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-9vq7q" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:26.922697 containerd[1494]: time="2025-11-08T00:32:26.921813746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:32:26.922697 containerd[1494]: time="2025-11-08T00:32:26.922577051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:32:26.922697 containerd[1494]: time="2025-11-08T00:32:26.922590426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:26.922697 containerd[1494]: time="2025-11-08T00:32:26.922677950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:26.951206 systemd[1]: Started cri-containerd-f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432.scope - libcontainer container f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432. 
Nov 8 00:32:26.966902 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:32:26.993704 containerd[1494]: time="2025-11-08T00:32:26.993476035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bdc4f9f54-9vq7q,Uid:51a57672-a43f-42d3-abfb-83cef5f71936,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432\"" Nov 8 00:32:27.174205 kubelet[2505]: I1108 00:32:27.174158 2505 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:32:27.174638 kubelet[2505]: E1108 00:32:27.174591 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:27.267173 containerd[1494]: time="2025-11-08T00:32:27.266896544Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:27.325407 containerd[1494]: time="2025-11-08T00:32:27.325230857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:32:27.325407 containerd[1494]: time="2025-11-08T00:32:27.325341385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:27.325648 kubelet[2505]: E1108 00:32:27.325533 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:32:27.325648 kubelet[2505]: E1108 00:32:27.325594 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:32:27.326197 kubelet[2505]: E1108 00:32:27.326094 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mnszr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68ff55f559-tjknv_calico-system(576c7105-d7be-4c5c-87aa-116f53250b26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:27.327695 kubelet[2505]: E1108 00:32:27.327277 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68ff55f559-tjknv" podUID="576c7105-d7be-4c5c-87aa-116f53250b26" Nov 8 00:32:27.327831 containerd[1494]: time="2025-11-08T00:32:27.327367623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:27.652479 kubelet[2505]: I1108 00:32:27.652425 2505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11140329-f7e3-441b-979e-c0443fd17e9d" path="/var/lib/kubelet/pods/11140329-f7e3-441b-979e-c0443fd17e9d/volumes" Nov 8 00:32:27.748238 systemd-networkd[1407]: cali51a49bf92a1: Gained IPv6LL Nov 8 00:32:27.813549 
containerd[1494]: time="2025-11-08T00:32:27.813475926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:27.816393 containerd[1494]: time="2025-11-08T00:32:27.816351209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:27.816512 containerd[1494]: time="2025-11-08T00:32:27.816424357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:27.816677 kubelet[2505]: E1108 00:32:27.816613 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:27.816677 kubelet[2505]: E1108 00:32:27.816678 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:27.816911 kubelet[2505]: E1108 00:32:27.816871 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrd4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bdc4f9f54-9vq7q_calico-apiserver(51a57672-a43f-42d3-abfb-83cef5f71936): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:27.818095 kubelet[2505]: E1108 00:32:27.818065 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" podUID="51a57672-a43f-42d3-abfb-83cef5f71936" Nov 8 00:32:27.845338 kubelet[2505]: E1108 00:32:27.845275 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" podUID="51a57672-a43f-42d3-abfb-83cef5f71936" Nov 8 00:32:27.845791 kubelet[2505]: E1108 00:32:27.845741 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68ff55f559-tjknv" podUID="576c7105-d7be-4c5c-87aa-116f53250b26" Nov 8 00:32:28.132380 systemd-networkd[1407]: vxlan.calico: Gained IPv6LL Nov 8 
00:32:28.580326 systemd-networkd[1407]: calib9d8307f110: Gained IPv6LL Nov 8 00:32:28.650866 containerd[1494]: time="2025-11-08T00:32:28.650344322Z" level=info msg="StopPodSandbox for \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\"" Nov 8 00:32:28.650866 containerd[1494]: time="2025-11-08T00:32:28.650399987Z" level=info msg="StopPodSandbox for \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\"" Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.703 [INFO][4260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.703 [INFO][4260] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" iface="eth0" netns="/var/run/netns/cni-a123f908-29cd-482f-8898-8a9f4eaa0b40" Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.703 [INFO][4260] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" iface="eth0" netns="/var/run/netns/cni-a123f908-29cd-482f-8898-8a9f4eaa0b40" Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.706 [INFO][4260] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" iface="eth0" netns="/var/run/netns/cni-a123f908-29cd-482f-8898-8a9f4eaa0b40" Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.707 [INFO][4260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.707 [INFO][4260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.730 [INFO][4281] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" HandleID="k8s-pod-network.32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.730 [INFO][4281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.731 [INFO][4281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.736 [WARNING][4281] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" HandleID="k8s-pod-network.32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.736 [INFO][4281] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" HandleID="k8s-pod-network.32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.738 [INFO][4281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:32:28.745057 containerd[1494]: 2025-11-08 00:32:28.741 [INFO][4260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:28.746125 containerd[1494]: time="2025-11-08T00:32:28.746093120Z" level=info msg="TearDown network for sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\" successfully" Nov 8 00:32:28.746125 containerd[1494]: time="2025-11-08T00:32:28.746123306Z" level=info msg="StopPodSandbox for \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\" returns successfully" Nov 8 00:32:28.746483 kubelet[2505]: E1108 00:32:28.746451 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:28.747199 containerd[1494]: time="2025-11-08T00:32:28.747145086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58f4z,Uid:123d6cb1-1650-4283-829f-77b1235c57a8,Namespace:kube-system,Attempt:1,}" Nov 8 00:32:28.750773 systemd[1]: run-netns-cni\x2da123f908\x2d29cd\x2d482f\x2d8898\x2d8a9f4eaa0b40.mount: Deactivated successfully. Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.699 [INFO][4259] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.699 [INFO][4259] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" iface="eth0" netns="/var/run/netns/cni-81f9891f-eb13-c0f4-cc1a-eadf40d75594" Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.699 [INFO][4259] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" iface="eth0" netns="/var/run/netns/cni-81f9891f-eb13-c0f4-cc1a-eadf40d75594" Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.699 [INFO][4259] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" iface="eth0" netns="/var/run/netns/cni-81f9891f-eb13-c0f4-cc1a-eadf40d75594" Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.699 [INFO][4259] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.699 [INFO][4259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.731 [INFO][4275] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" HandleID="k8s-pod-network.db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.731 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.738 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.743 [WARNING][4275] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" HandleID="k8s-pod-network.db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.743 [INFO][4275] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" HandleID="k8s-pod-network.db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.744 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:28.755853 containerd[1494]: 2025-11-08 00:32:28.752 [INFO][4259] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:28.759393 containerd[1494]: time="2025-11-08T00:32:28.759365077Z" level=info msg="TearDown network for sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\" successfully" Nov 8 00:32:28.759393 containerd[1494]: time="2025-11-08T00:32:28.759388471Z" level=info msg="StopPodSandbox for \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\" returns successfully" Nov 8 00:32:28.759414 systemd[1]: run-netns-cni\x2d81f9891f\x2deb13\x2dc0f4\x2dcc1a\x2deadf40d75594.mount: Deactivated successfully. Nov 8 00:32:28.759663 kubelet[2505]: E1108 00:32:28.759640 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:28.760252 containerd[1494]: time="2025-11-08T00:32:28.760210536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h8h25,Uid:08a2c12f-2341-4bf8-ac6e-959cce58e330,Namespace:kube-system,Attempt:1,}" Nov 8 00:32:28.850159 kubelet[2505]: E1108 00:32:28.848587 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" podUID="51a57672-a43f-42d3-abfb-83cef5f71936" Nov 8 00:32:28.880713 systemd-networkd[1407]: califcc1381bdd7: Link UP Nov 8 00:32:28.881174 systemd-networkd[1407]: califcc1381bdd7: Gained carrier Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.801 [INFO][4291] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--58f4z-eth0 coredns-668d6bf9bc- kube-system 123d6cb1-1650-4283-829f-77b1235c57a8 1041 0 2025-11-08 00:31:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-58f4z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califcc1381bdd7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-58f4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58f4z-" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.801 [INFO][4291] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-58f4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.830 [INFO][4319] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" HandleID="k8s-pod-network.0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.830 [INFO][4319] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" HandleID="k8s-pod-network.0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-58f4z", "timestamp":"2025-11-08 00:32:28.830195294 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.830 [INFO][4319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.830 [INFO][4319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.830 [INFO][4319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.836 [INFO][4319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" host="localhost" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.842 [INFO][4319] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.849 [INFO][4319] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.853 [INFO][4319] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.856 [INFO][4319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.856 [INFO][4319] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" host="localhost" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.862 [INFO][4319] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2 Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.868 [INFO][4319] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" host="localhost" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.873 [INFO][4319] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" host="localhost" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.873 [INFO][4319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" host="localhost" Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.873 [INFO][4319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:32:28.895173 containerd[1494]: 2025-11-08 00:32:28.873 [INFO][4319] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" HandleID="k8s-pod-network.0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:28.895927 containerd[1494]: 2025-11-08 00:32:28.876 [INFO][4291] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-58f4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--58f4z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"123d6cb1-1650-4283-829f-77b1235c57a8", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-58f4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califcc1381bdd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:28.895927 containerd[1494]: 2025-11-08 00:32:28.877 [INFO][4291] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-58f4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:28.895927 containerd[1494]: 2025-11-08 00:32:28.877 [INFO][4291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcc1381bdd7 ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-58f4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:28.895927 containerd[1494]: 2025-11-08 00:32:28.881 [INFO][4291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-58f4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:28.895927 
containerd[1494]: 2025-11-08 00:32:28.882 [INFO][4291] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-58f4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--58f4z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"123d6cb1-1650-4283-829f-77b1235c57a8", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2", Pod:"coredns-668d6bf9bc-58f4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califcc1381bdd7", MAC:"06:b6:73:c4:04:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:28.895927 containerd[1494]: 2025-11-08 00:32:28.891 [INFO][4291] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2" Namespace="kube-system" Pod="coredns-668d6bf9bc-58f4z" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:28.918177 containerd[1494]: time="2025-11-08T00:32:28.918035902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:32:28.918914 containerd[1494]: time="2025-11-08T00:32:28.918855893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:32:28.920003 containerd[1494]: time="2025-11-08T00:32:28.918938748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:28.920003 containerd[1494]: time="2025-11-08T00:32:28.919122794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:28.941224 systemd[1]: Started cri-containerd-0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2.scope - libcontainer container 0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2. Nov 8 00:32:28.959628 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:32:28.975134 systemd-networkd[1407]: cali46c4027348f: Link UP Nov 8 00:32:28.977518 systemd-networkd[1407]: cali46c4027348f: Gained carrier Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.819 [INFO][4303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--h8h25-eth0 coredns-668d6bf9bc- kube-system 08a2c12f-2341-4bf8-ac6e-959cce58e330 1040 0 2025-11-08 00:31:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-h8h25 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46c4027348f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Namespace="kube-system" Pod="coredns-668d6bf9bc-h8h25" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h8h25-" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.819 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Namespace="kube-system" Pod="coredns-668d6bf9bc-h8h25" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.859 [INFO][4326] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" HandleID="k8s-pod-network.91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.859 [INFO][4326] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" HandleID="k8s-pod-network.91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-h8h25", "timestamp":"2025-11-08 00:32:28.859313936 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.859 [INFO][4326] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.873 [INFO][4326] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.873 [INFO][4326] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.937 [INFO][4326] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" host="localhost" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.943 [INFO][4326] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.949 [INFO][4326] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.951 [INFO][4326] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.953 [INFO][4326] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.953 [INFO][4326] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" host="localhost" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.954 [INFO][4326] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313 Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.958 [INFO][4326] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" host="localhost" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.967 [INFO][4326] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" host="localhost" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.967 [INFO][4326] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" host="localhost" Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.967 [INFO][4326] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:32:29.006046 containerd[1494]: 2025-11-08 00:32:28.967 [INFO][4326] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" HandleID="k8s-pod-network.91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:29.006720 containerd[1494]: 2025-11-08 00:32:28.971 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Namespace="kube-system" Pod="coredns-668d6bf9bc-h8h25" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h8h25-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"08a2c12f-2341-4bf8-ac6e-959cce58e330", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-h8h25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c4027348f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:29.006720 containerd[1494]: 2025-11-08 00:32:28.972 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Namespace="kube-system" Pod="coredns-668d6bf9bc-h8h25" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:29.006720 containerd[1494]: 2025-11-08 00:32:28.972 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46c4027348f ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Namespace="kube-system" Pod="coredns-668d6bf9bc-h8h25" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:29.006720 containerd[1494]: 2025-11-08 00:32:28.978 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Namespace="kube-system" Pod="coredns-668d6bf9bc-h8h25" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:29.006720 
containerd[1494]: 2025-11-08 00:32:28.978 [INFO][4303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Namespace="kube-system" Pod="coredns-668d6bf9bc-h8h25" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h8h25-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"08a2c12f-2341-4bf8-ac6e-959cce58e330", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313", Pod:"coredns-668d6bf9bc-h8h25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c4027348f", MAC:"26:aa:d0:db:26:b1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:29.006720 containerd[1494]: 2025-11-08 00:32:29.000 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313" Namespace="kube-system" Pod="coredns-668d6bf9bc-h8h25" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:29.006720 containerd[1494]: time="2025-11-08T00:32:29.006135777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58f4z,Uid:123d6cb1-1650-4283-829f-77b1235c57a8,Namespace:kube-system,Attempt:1,} returns sandbox id \"0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2\"" Nov 8 00:32:29.007660 kubelet[2505]: E1108 00:32:29.007621 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:29.010715 containerd[1494]: time="2025-11-08T00:32:29.010675805Z" level=info msg="CreateContainer within sandbox \"0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:32:29.033424 containerd[1494]: time="2025-11-08T00:32:29.033379359Z" level=info msg="CreateContainer within sandbox \"0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"899c915842105347cd3d91745bf72a5281771d191bc9bc1a58b3a04ef3cf2b84\"" Nov 8 00:32:29.034970 containerd[1494]: time="2025-11-08T00:32:29.034888664Z" level=info msg="StartContainer for \"899c915842105347cd3d91745bf72a5281771d191bc9bc1a58b3a04ef3cf2b84\"" Nov 8 00:32:29.042119 containerd[1494]: time="2025-11-08T00:32:29.038292169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:32:29.042119 containerd[1494]: time="2025-11-08T00:32:29.040735679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:32:29.042119 containerd[1494]: time="2025-11-08T00:32:29.040751559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:29.042119 containerd[1494]: time="2025-11-08T00:32:29.040834905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:29.067106 systemd[1]: Started cri-containerd-91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313.scope - libcontainer container 91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313. Nov 8 00:32:29.072457 systemd[1]: Started cri-containerd-899c915842105347cd3d91745bf72a5281771d191bc9bc1a58b3a04ef3cf2b84.scope - libcontainer container 899c915842105347cd3d91745bf72a5281771d191bc9bc1a58b3a04ef3cf2b84. Nov 8 00:32:29.087681 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:32:29.104490 containerd[1494]: time="2025-11-08T00:32:29.104324003Z" level=info msg="StartContainer for \"899c915842105347cd3d91745bf72a5281771d191bc9bc1a58b3a04ef3cf2b84\" returns successfully" Nov 8 00:32:29.127277 containerd[1494]: time="2025-11-08T00:32:29.127215910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-h8h25,Uid:08a2c12f-2341-4bf8-ac6e-959cce58e330,Namespace:kube-system,Attempt:1,} returns sandbox id \"91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313\"" Nov 8 00:32:29.128445 kubelet[2505]: E1108 00:32:29.128287 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:29.131114 containerd[1494]: time="2025-11-08T00:32:29.130747605Z" level=info msg="CreateContainer within sandbox \"91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:32:29.150914 containerd[1494]: time="2025-11-08T00:32:29.150837819Z" level=info msg="CreateContainer within sandbox \"91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43a99d353fadf4b760c5978d5e3b6e7908ad87bc16583a9e310c1c45e2885c29\"" Nov 8 00:32:29.151733 containerd[1494]: time="2025-11-08T00:32:29.151676044Z" level=info msg="StartContainer for \"43a99d353fadf4b760c5978d5e3b6e7908ad87bc16583a9e310c1c45e2885c29\"" Nov 8 00:32:29.192289 systemd[1]: Started cri-containerd-43a99d353fadf4b760c5978d5e3b6e7908ad87bc16583a9e310c1c45e2885c29.scope - libcontainer container 43a99d353fadf4b760c5978d5e3b6e7908ad87bc16583a9e310c1c45e2885c29. 
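Each CreateContainer/StartContainer pair is kubelet driving containerd: create a container object inside the already-running sandbox, then start a task for it, which systemd tracks as a cri-containerd-<id>.scope. A rough equivalent using the containerd Go client directly rather than the CRI gRPC path; the socket path and the image reference are assumptions:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Assumed socket path; "k8s.io" is the namespace kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Assumes the image is already present (the log's pods had theirs
	// pulled earlier; the image ref here is illustrative).
	image, err := client.GetImage(ctx, "registry.k8s.io/coredns/coredns:v1.11.1")
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer within sandbox ... returns container id ..."
	container, err := client.NewContainer(ctx, "coredns-example",
		containerd.WithNewSnapshot("coredns-example-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// "StartContainer for ... returns successfully": the running task,
	// which systemd tracks as a cri-containerd-<id>.scope unit.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```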
Nov 8 00:32:29.227691 containerd[1494]: time="2025-11-08T00:32:29.227632734Z" level=info msg="StartContainer for \"43a99d353fadf4b760c5978d5e3b6e7908ad87bc16583a9e310c1c45e2885c29\" returns successfully" Nov 8 00:32:29.651405 containerd[1494]: time="2025-11-08T00:32:29.650986061Z" level=info msg="StopPodSandbox for \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\"" Nov 8 00:32:29.651405 containerd[1494]: time="2025-11-08T00:32:29.651073786Z" level=info msg="StopPodSandbox for \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\"" Nov 8 00:32:29.651899 containerd[1494]: time="2025-11-08T00:32:29.651463748Z" level=info msg="StopPodSandbox for \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\"" Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4546] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4546] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" iface="eth0" netns="/var/run/netns/cni-84ceb9ca-61bb-9b17-71bd-05b86260cb8d" Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4546] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" iface="eth0" netns="/var/run/netns/cni-84ceb9ca-61bb-9b17-71bd-05b86260cb8d" Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4546] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" iface="eth0" netns="/var/run/netns/cni-84ceb9ca-61bb-9b17-71bd-05b86260cb8d" Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4546] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.803 [INFO][4573] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" HandleID="k8s-pod-network.5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.803 [INFO][4573] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.803 [INFO][4573] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.809 [WARNING][4573] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" HandleID="k8s-pod-network.5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.809 [INFO][4573] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" HandleID="k8s-pod-network.5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.811 [INFO][4573] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:29.817520 containerd[1494]: 2025-11-08 00:32:29.813 [INFO][4546] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:29.818100 containerd[1494]: time="2025-11-08T00:32:29.817702973Z" level=info msg="TearDown network for sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\" successfully" Nov 8 00:32:29.818100 containerd[1494]: time="2025-11-08T00:32:29.817732449Z" level=info msg="StopPodSandbox for \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\" returns successfully" Nov 8 00:32:29.818475 containerd[1494]: time="2025-11-08T00:32:29.818447452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lkl4b,Uid:88835561-0fd8-4963-bbc3-b0aaf46c9820,Namespace:calico-system,Attempt:1,}" Nov 8 00:32:29.820806 systemd[1]: run-netns-cni\x2d84ceb9ca\x2d61bb\x2d9b17\x2d71bd\x2d05b86260cb8d.mount: Deactivated successfully. Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.774 [INFO][4545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.774 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" iface="eth0" netns="/var/run/netns/cni-b432fa21-ac01-846d-f755-659123a38e62" Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.775 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" iface="eth0" netns="/var/run/netns/cni-b432fa21-ac01-846d-f755-659123a38e62" Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" iface="eth0" netns="/var/run/netns/cni-b432fa21-ac01-846d-f755-659123a38e62" Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.805 [INFO][4571] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" HandleID="k8s-pod-network.58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.805 [INFO][4571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.811 [INFO][4571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.816 [WARNING][4571] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" HandleID="k8s-pod-network.58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.816 [INFO][4571] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" HandleID="k8s-pod-network.58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.820 [INFO][4571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:29.827010 containerd[1494]: 2025-11-08 00:32:29.824 [INFO][4545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:29.828265 containerd[1494]: time="2025-11-08T00:32:29.828234980Z" level=info msg="TearDown network for sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\" successfully" Nov 8 00:32:29.829158 containerd[1494]: time="2025-11-08T00:32:29.829023742Z" level=info msg="StopPodSandbox for \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\" returns successfully" Nov 8 00:32:29.830258 containerd[1494]: time="2025-11-08T00:32:29.830218256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lzqlc,Uid:417d4903-c711-42c7-9ef7-788a2e600314,Namespace:calico-system,Attempt:1,}" Nov 8 00:32:29.831369 systemd[1]: run-netns-cni\x2db432fa21\x2dac01\x2d846d\x2df755\x2d659123a38e62.mount: Deactivated successfully. Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.775 [INFO][4547] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.775 [INFO][4547] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" iface="eth0" netns="/var/run/netns/cni-abafd3df-edc2-deac-31df-047a58720b49" Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4547] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" iface="eth0" netns="/var/run/netns/cni-abafd3df-edc2-deac-31df-047a58720b49" Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4547] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" iface="eth0" netns="/var/run/netns/cni-abafd3df-edc2-deac-31df-047a58720b49" Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.776 [INFO][4547] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.777 [INFO][4547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.810 [INFO][4574] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" HandleID="k8s-pod-network.80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.810 [INFO][4574] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.820 [INFO][4574] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.825 [WARNING][4574] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" HandleID="k8s-pod-network.80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.825 [INFO][4574] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" HandleID="k8s-pod-network.80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.827 [INFO][4574] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:29.835208 containerd[1494]: 2025-11-08 00:32:29.831 [INFO][4547] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:29.836104 containerd[1494]: time="2025-11-08T00:32:29.836059961Z" level=info msg="TearDown network for sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\" successfully" Nov 8 00:32:29.836104 containerd[1494]: time="2025-11-08T00:32:29.836091059Z" level=info msg="StopPodSandbox for \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\" returns successfully" Nov 8 00:32:29.836836 containerd[1494]: time="2025-11-08T00:32:29.836813777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7689cf9c54-vlx96,Uid:5d526354-b399-458e-b2b3-be2f314ae23a,Namespace:calico-system,Attempt:1,}" Nov 8 00:32:29.840193 systemd[1]: run-netns-cni\x2dabafd3df\x2dedc2\x2ddeac\x2d31df\x2d047a58720b49.mount: Deactivated successfully. Nov 8 00:32:29.853685 kubelet[2505]: E1108 00:32:29.853652 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:29.859185 kubelet[2505]: E1108 00:32:29.859098 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:29.905970 kubelet[2505]: I1108 00:32:29.905268 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-h8h25" podStartSLOduration=37.90524569 podStartE2EDuration="37.90524569s" podCreationTimestamp="2025-11-08 00:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:32:29.885049598 +0000 UTC m=+44.319268222" watchObservedRunningTime="2025-11-08 00:32:29.90524569 +0000 UTC m=+44.339464304" Nov 8 00:32:30.096082 systemd-networkd[1407]: cali5e2e5c4031a: Link UP Nov 8 00:32:30.096311 systemd-networkd[1407]: cali5e2e5c4031a: Gained carrier Nov 8 00:32:30.106832 kubelet[2505]: I1108 00:32:30.106702 2505 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-58f4z" podStartSLOduration=38.106679941 podStartE2EDuration="38.106679941s" podCreationTimestamp="2025-11-08 00:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:32:29.934504238 +0000 UTC m=+44.368722872" watchObservedRunningTime="2025-11-08 00:32:30.106679941 +0000 UTC m=+44.540898555" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:29.945 [INFO][4604] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--lzqlc-eth0 goldmane-666569f655- calico-system 417d4903-c711-42c7-9ef7-788a2e600314 1071 0 2025-11-08 00:32:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-lzqlc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5e2e5c4031a [] [] }} ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Namespace="calico-system" Pod="goldmane-666569f655-lzqlc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lzqlc-" Nov 8 00:32:30.110042 containerd[1494]: 
2025-11-08 00:32:29.946 [INFO][4604] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Namespace="calico-system" Pod="goldmane-666569f655-lzqlc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.004 [INFO][4638] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" HandleID="k8s-pod-network.36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.004 [INFO][4638] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" HandleID="k8s-pod-network.36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a35e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-lzqlc", "timestamp":"2025-11-08 00:32:30.004425208 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.004 [INFO][4638] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.004 [INFO][4638] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.004 [INFO][4638] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.014 [INFO][4638] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" host="localhost" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.020 [INFO][4638] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.024 [INFO][4638] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.026 [INFO][4638] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.028 [INFO][4638] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.028 [INFO][4638] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" host="localhost" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.029 [INFO][4638] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982 Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.034 [INFO][4638] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" host="localhost" Nov 8 00:32:30.110042 
containerd[1494]: 2025-11-08 00:32:30.086 [INFO][4638] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" host="localhost" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.087 [INFO][4638] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" host="localhost" Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.087 [INFO][4638] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:30.110042 containerd[1494]: 2025-11-08 00:32:30.087 [INFO][4638] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" HandleID="k8s-pod-network.36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:30.110733 containerd[1494]: 2025-11-08 00:32:30.090 [INFO][4604] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Namespace="calico-system" Pod="goldmane-666569f655-lzqlc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lzqlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lzqlc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"417d4903-c711-42c7-9ef7-788a2e600314", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-lzqlc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5e2e5c4031a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:30.110733 containerd[1494]: 2025-11-08 00:32:30.090 [INFO][4604] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Namespace="calico-system" Pod="goldmane-666569f655-lzqlc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:30.110733 containerd[1494]: 2025-11-08 00:32:30.090 [INFO][4604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e2e5c4031a ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Namespace="calico-system" Pod="goldmane-666569f655-lzqlc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:30.110733 containerd[1494]: 2025-11-08 00:32:30.094 [INFO][4604] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Namespace="calico-system" Pod="goldmane-666569f655-lzqlc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:30.110733 containerd[1494]: 2025-11-08 00:32:30.095 [INFO][4604] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Namespace="calico-system" Pod="goldmane-666569f655-lzqlc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lzqlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lzqlc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"417d4903-c711-42c7-9ef7-788a2e600314", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982", Pod:"goldmane-666569f655-lzqlc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5e2e5c4031a", MAC:"2a:47:c4:fe:2b:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:30.110733 containerd[1494]: 2025-11-08 00:32:30.106 [INFO][4604] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982" Namespace="calico-system" Pod="goldmane-666569f655-lzqlc" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:30.151807 systemd-networkd[1407]: calic58c8ca9cbb: Link UP Nov 8 00:32:30.152361 containerd[1494]: time="2025-11-08T00:32:30.151177689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:32:30.152361 containerd[1494]: time="2025-11-08T00:32:30.151241500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:32:30.152361 containerd[1494]: time="2025-11-08T00:32:30.151253913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:30.152361 containerd[1494]: time="2025-11-08T00:32:30.151337370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:30.157094 systemd-networkd[1407]: calic58c8ca9cbb: Gained carrier Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:29.945 [INFO][4593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lkl4b-eth0 csi-node-driver- calico-system 88835561-0fd8-4963-bbc3-b0aaf46c9820 1072 0 2025-11-08 00:32:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lkl4b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic58c8ca9cbb [] [] }} ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Namespace="calico-system" Pod="csi-node-driver-lkl4b" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkl4b-" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:29.948 [INFO][4593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Namespace="calico-system" Pod="csi-node-driver-lkl4b" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.006 [INFO][4642] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" HandleID="k8s-pod-network.a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.011 [INFO][4642] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" HandleID="k8s-pod-network.a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000446340), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lkl4b", "timestamp":"2025-11-08 00:32:30.006378187 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.011 [INFO][4642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.087 [INFO][4642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.087 [INFO][4642] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.114 [INFO][4642] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" host="localhost" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.120 [INFO][4642] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.125 [INFO][4642] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.127 [INFO][4642] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.129 [INFO][4642] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.129 [INFO][4642] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" host="localhost" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.130 [INFO][4642] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0 Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.134 [INFO][4642] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" host="localhost" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.141 [INFO][4642] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" host="localhost" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.141 [INFO][4642] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" host="localhost" Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.141 [INFO][4642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
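A quick sanity check on the addressing so far: .132 (coredns), .133 (goldmane) and .134 (csi-node-driver) are handed out sequentially from the same host-affine block, and a /26 holds 64 addresses:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses per /26
	for _, s := range []string{"192.168.88.132", "192.168.88.133", "192.168.88.134"} {
		fmt.Println(s, block.Contains(netip.MustParseAddr(s))) // all true
	}
}
```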
Nov 8 00:32:30.175933 containerd[1494]: 2025-11-08 00:32:30.141 [INFO][4642] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" HandleID="k8s-pod-network.a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:30.176477 containerd[1494]: 2025-11-08 00:32:30.149 [INFO][4593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Namespace="calico-system" Pod="csi-node-driver-lkl4b" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkl4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lkl4b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88835561-0fd8-4963-bbc3-b0aaf46c9820", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lkl4b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic58c8ca9cbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:30.176477 containerd[1494]: 2025-11-08 00:32:30.149 [INFO][4593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Namespace="calico-system" Pod="csi-node-driver-lkl4b" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:30.176477 containerd[1494]: 2025-11-08 00:32:30.149 [INFO][4593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic58c8ca9cbb ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Namespace="calico-system" Pod="csi-node-driver-lkl4b" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:30.176477 containerd[1494]: 2025-11-08 00:32:30.158 [INFO][4593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Namespace="calico-system" Pod="csi-node-driver-lkl4b" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:30.176477 containerd[1494]: 2025-11-08 00:32:30.159 [INFO][4593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Namespace="calico-system" Pod="csi-node-driver-lkl4b" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--lkl4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lkl4b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88835561-0fd8-4963-bbc3-b0aaf46c9820", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0", Pod:"csi-node-driver-lkl4b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic58c8ca9cbb", MAC:"c2:24:9f:0e:0d:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:30.176477 containerd[1494]: 2025-11-08 00:32:30.173 [INFO][4593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0" Namespace="calico-system" Pod="csi-node-driver-lkl4b" WorkloadEndpoint="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:30.178508 systemd[1]: Started cri-containerd-36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982.scope - libcontainer container 36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982. Nov 8 00:32:30.196792 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:32:30.201738 containerd[1494]: time="2025-11-08T00:32:30.199893090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:32:30.201738 containerd[1494]: time="2025-11-08T00:32:30.199967690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:32:30.201738 containerd[1494]: time="2025-11-08T00:32:30.199980935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:30.201738 containerd[1494]: time="2025-11-08T00:32:30.200062980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:30.219130 systemd[1]: Started cri-containerd-a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0.scope - libcontainer container a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0. Nov 8 00:32:30.227313 systemd[1]: Started sshd@8-10.0.0.145:22-10.0.0.1:42578.service - OpenSSH per-connection server daemon (10.0.0.1:42578). 
Nov 8 00:32:30.233793 containerd[1494]: time="2025-11-08T00:32:30.233717549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lzqlc,Uid:417d4903-c711-42c7-9ef7-788a2e600314,Namespace:calico-system,Attempt:1,} returns sandbox id \"36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982\"" Nov 8 00:32:30.236666 containerd[1494]: time="2025-11-08T00:32:30.236553035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:32:30.240214 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:32:30.249498 systemd-networkd[1407]: cali46c4027348f: Gained IPv6LL Nov 8 00:32:30.260771 containerd[1494]: time="2025-11-08T00:32:30.260721956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lkl4b,Uid:88835561-0fd8-4963-bbc3-b0aaf46c9820,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0\"" Nov 8 00:32:30.279189 sshd[4765]: Accepted publickey for core from 10.0.0.1 port 42578 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:32:30.281515 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:30.285942 systemd-logind[1456]: New session 9 of user core. Nov 8 00:32:30.296207 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:32:30.490003 sshd[4765]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:30.495885 systemd-networkd[1407]: cali37886e22187: Link UP Nov 8 00:32:30.496285 systemd-networkd[1407]: cali37886e22187: Gained carrier Nov 8 00:32:30.498191 systemd[1]: sshd@8-10.0.0.145:22-10.0.0.1:42578.service: Deactivated successfully. Nov 8 00:32:30.501392 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:32:30.502344 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:32:30.504583 systemd-logind[1456]: Removed session 9. 
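The "Link UP", "Gained carrier" and "Gained IPv6LL" lines are systemd-networkd reacting to the veth pairs the CNI plugin creates, host side named cali*. A sketch of that creation with the vishvananda/netlink package, which Calico's Linux dataplane builds on; the peer name is a placeholder (the real peer is moved into the pod's netns and renamed eth0), and running it needs CAP_NET_ADMIN:

```go
package main

// Sketch of the veth setup behind "Setting the host side veth name ..."
// and networkd's "Link UP" / "Gained carrier", using vishvananda/netlink.
// Requires CAP_NET_ADMIN; the peer name is a placeholder.

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "cali37886e22187"}, // host side, from the log
		PeerName:  "tmp-pod-eth0",                             // placeholder peer name
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}
	// Bringing the host side up is what systemd-networkd then reports
	// as "Link UP" followed by "Gained carrier".
	if err := netlink.LinkSetUp(veth); err != nil {
		log.Fatal(err)
	}
}
```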
Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:29.947 [INFO][4618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0 calico-kube-controllers-7689cf9c54- calico-system 5d526354-b399-458e-b2b3-be2f314ae23a 1070 0 2025-11-08 00:32:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7689cf9c54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7689cf9c54-vlx96 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali37886e22187 [] [] }} ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Namespace="calico-system" Pod="calico-kube-controllers-7689cf9c54-vlx96" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:29.948 [INFO][4618] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Namespace="calico-system" Pod="calico-kube-controllers-7689cf9c54-vlx96" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.012 [INFO][4640] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" HandleID="k8s-pod-network.fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.012 [INFO][4640] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" HandleID="k8s-pod-network.fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139a50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7689cf9c54-vlx96", "timestamp":"2025-11-08 00:32:30.012100005 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.012 [INFO][4640] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.142 [INFO][4640] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.142 [INFO][4640] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.216 [INFO][4640] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" host="localhost" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.223 [INFO][4640] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.228 [INFO][4640] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.230 [INFO][4640] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.232 [INFO][4640] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.232 [INFO][4640] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" host="localhost" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.234 [INFO][4640] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36 Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.238 [INFO][4640] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" host="localhost" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.481 [INFO][4640] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" host="localhost" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.481 [INFO][4640] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" host="localhost" Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.481 [INFO][4640] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
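The timestamps here are worth a second look: three CNI ADDs ran concurrently ([4638], [4642], [4640]), and each acquired the host-wide IPAM lock the instant the previous holder released it (30.004, then 30.087, then 30.142). A toy reproduction of that serialization; unlike the log, the toy's assignment order is whatever the Go scheduler picks:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var ipamLock sync.Mutex // the host-wide lock all three ADDs queue on
	next := 133             // .133, .134, .135, as assigned in the log
	var wg sync.WaitGroup
	pods := []string{
		"goldmane-666569f655-lzqlc",
		"csi-node-driver-lkl4b",
		"calico-kube-controllers-7689cf9c54-vlx96",
	}
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			ipamLock.Lock() // "About to acquire host-wide IPAM lock."
			ip := next
			next++
			ipamLock.Unlock() // "Released host-wide IPAM lock."
			fmt.Printf("%s -> 192.168.88.%d/26\n", pod, ip)
		}(pod)
	}
	wg.Wait()
}
```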
Nov 8 00:32:30.511796 containerd[1494]: 2025-11-08 00:32:30.481 [INFO][4640] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" HandleID="k8s-pod-network.fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:30.512631 containerd[1494]: 2025-11-08 00:32:30.489 [INFO][4618] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Namespace="calico-system" Pod="calico-kube-controllers-7689cf9c54-vlx96" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0", GenerateName:"calico-kube-controllers-7689cf9c54-", Namespace:"calico-system", SelfLink:"", UID:"5d526354-b399-458e-b2b3-be2f314ae23a", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7689cf9c54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7689cf9c54-vlx96", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37886e22187", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:30.512631 containerd[1494]: 2025-11-08 00:32:30.489 [INFO][4618] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Namespace="calico-system" Pod="calico-kube-controllers-7689cf9c54-vlx96" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:30.512631 containerd[1494]: 2025-11-08 00:32:30.490 [INFO][4618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37886e22187 ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Namespace="calico-system" Pod="calico-kube-controllers-7689cf9c54-vlx96" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:30.512631 containerd[1494]: 2025-11-08 00:32:30.497 [INFO][4618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Namespace="calico-system" Pod="calico-kube-controllers-7689cf9c54-vlx96" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:30.512631 containerd[1494]: 2025-11-08 00:32:30.497 [INFO][4618] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Namespace="calico-system" Pod="calico-kube-controllers-7689cf9c54-vlx96" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0", GenerateName:"calico-kube-controllers-7689cf9c54-", Namespace:"calico-system", SelfLink:"", UID:"5d526354-b399-458e-b2b3-be2f314ae23a", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7689cf9c54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36", Pod:"calico-kube-controllers-7689cf9c54-vlx96", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37886e22187", MAC:"26:ab:6b:a8:13:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:30.512631 containerd[1494]: 2025-11-08 00:32:30.508 [INFO][4618] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36" Namespace="calico-system" Pod="calico-kube-controllers-7689cf9c54-vlx96" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:30.539203 containerd[1494]: time="2025-11-08T00:32:30.539073372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:32:30.539203 containerd[1494]: time="2025-11-08T00:32:30.539155667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:32:30.539203 containerd[1494]: time="2025-11-08T00:32:30.539168942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:30.539548 containerd[1494]: time="2025-11-08T00:32:30.539485286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:30.559132 systemd[1]: Started cri-containerd-fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36.scope - libcontainer container fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36. 
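
The host-side veth name in these records (cali37886e22187) is deterministic: Calico reportedly builds it from a fixed prefix plus the leading hex digits of a SHA-1 hash of the workload endpoint ID, truncated to the kernel's 15-byte interface-name limit (IFNAMSIZ). A hedged sketch of that derivation; the exact hash input Calico uses is an assumption here, not something the log confirms:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName sketches how a name like "cali37886e22187" can be derived:
    // prefix + leading SHA-1 hex digits, truncated to 15 bytes so it fits
    // IFNAMSIZ. The endpoint-ID format below is illustrative.
    func vethName(prefix, endpointID string) string {
        sum := sha1.Sum([]byte(endpointID))
        return (prefix + hex.EncodeToString(sum[:]))[:15]
    }

    func main() {
        fmt.Println(vethName("cali", "calico-system/calico-kube-controllers-7689cf9c54-vlx96"))
    }
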
Nov 8 00:32:30.574555 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:32:30.602604 containerd[1494]: time="2025-11-08T00:32:30.602557505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7689cf9c54-vlx96,Uid:5d526354-b399-458e-b2b3-be2f314ae23a,Namespace:calico-system,Attempt:1,} returns sandbox id \"fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36\"" Nov 8 00:32:30.614258 containerd[1494]: time="2025-11-08T00:32:30.614211868Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:30.615395 containerd[1494]: time="2025-11-08T00:32:30.615344977Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:32:30.615559 containerd[1494]: time="2025-11-08T00:32:30.615394520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:30.615619 kubelet[2505]: E1108 00:32:30.615556 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:30.615741 kubelet[2505]: E1108 00:32:30.615622 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:30.615997 kubelet[2505]: E1108 00:32:30.615903 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqhfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lzqlc_calico-system(417d4903-c711-42c7-9ef7-788a2e600314): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:30.616212 containerd[1494]: time="2025-11-08T00:32:30.616023952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:32:30.617166 kubelet[2505]: E1108 00:32:30.617125 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-lzqlc" podUID="417d4903-c711-42c7-9ef7-788a2e600314" Nov 8 00:32:30.650552 containerd[1494]: time="2025-11-08T00:32:30.650406870Z" level=info msg="StopPodSandbox for \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\"" Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.692 [INFO][4847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.693 [INFO][4847] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" iface="eth0" netns="/var/run/netns/cni-b1b2a685-279b-8767-e553-cb0f0a4fd321" Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.693 [INFO][4847] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" iface="eth0" netns="/var/run/netns/cni-b1b2a685-279b-8767-e553-cb0f0a4fd321" Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.693 [INFO][4847] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" iface="eth0" netns="/var/run/netns/cni-b1b2a685-279b-8767-e553-cb0f0a4fd321" Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.693 [INFO][4847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.694 [INFO][4847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.715 [INFO][4856] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" HandleID="k8s-pod-network.aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.715 [INFO][4856] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.715 [INFO][4856] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.721 [WARNING][4856] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" HandleID="k8s-pod-network.aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.721 [INFO][4856] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" HandleID="k8s-pod-network.aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.723 [INFO][4856] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:30.729251 containerd[1494]: 2025-11-08 00:32:30.725 [INFO][4847] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:30.730029 containerd[1494]: time="2025-11-08T00:32:30.729410606Z" level=info msg="TearDown network for sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\" successfully" Nov 8 00:32:30.730029 containerd[1494]: time="2025-11-08T00:32:30.729436084Z" level=info msg="StopPodSandbox for \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\" returns successfully" Nov 8 00:32:30.730173 containerd[1494]: time="2025-11-08T00:32:30.730126992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bdc4f9f54-592vx,Uid:b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:32:30.763813 systemd[1]: run-netns-cni\x2db1b2a685\x2d279b\x2d8767\x2de553\x2dcb0f0a4fd321.mount: Deactivated successfully. Nov 8 00:32:30.862861 systemd-networkd[1407]: cali2ab280b94f4: Link UP Nov 8 00:32:30.865507 systemd-networkd[1407]: cali2ab280b94f4: Gained carrier Nov 8 00:32:30.885578 systemd-networkd[1407]: califcc1381bdd7: Gained IPv6LL Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.788 [INFO][4865] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0 calico-apiserver-6bdc4f9f54- calico-apiserver b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9 1105 0 2025-11-08 00:32:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bdc4f9f54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6bdc4f9f54-592vx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ab280b94f4 [] [] }} ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-592vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.788 [INFO][4865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-592vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.822 [INFO][4879] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" HandleID="k8s-pod-network.fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.823 [INFO][4879] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" HandleID="k8s-pod-network.fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c75c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6bdc4f9f54-592vx", "timestamp":"2025-11-08 00:32:30.822980685 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.823 [INFO][4879] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.823 [INFO][4879] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.823 [INFO][4879] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.829 [INFO][4879] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" host="localhost" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.834 [INFO][4879] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.839 [INFO][4879] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.840 [INFO][4879] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.843 [INFO][4879] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.843 [INFO][4879] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" host="localhost" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.844 [INFO][4879] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5 Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.848 [INFO][4879] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" host="localhost" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.854 [INFO][4879] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" host="localhost" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.855 [INFO][4879] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" host="localhost" Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.855 [INFO][4879] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:32:30.898083 containerd[1494]: 2025-11-08 00:32:30.855 [INFO][4879] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" HandleID="k8s-pod-network.fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:30.898600 containerd[1494]: 2025-11-08 00:32:30.859 [INFO][4865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-592vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0", GenerateName:"calico-apiserver-6bdc4f9f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bdc4f9f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6bdc4f9f54-592vx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ab280b94f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:30.898600 containerd[1494]: 2025-11-08 00:32:30.859 [INFO][4865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-592vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:30.898600 containerd[1494]: 2025-11-08 00:32:30.859 [INFO][4865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ab280b94f4 ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-592vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:30.898600 containerd[1494]: 2025-11-08 00:32:30.868 [INFO][4865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-592vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:30.898600 containerd[1494]: 2025-11-08 00:32:30.872 [INFO][4865] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-592vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0", GenerateName:"calico-apiserver-6bdc4f9f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bdc4f9f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5", Pod:"calico-apiserver-6bdc4f9f54-592vx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ab280b94f4", MAC:"ea:ba:b8:fa:48:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:30.898600 containerd[1494]: 2025-11-08 00:32:30.887 [INFO][4865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5" Namespace="calico-apiserver" Pod="calico-apiserver-6bdc4f9f54-592vx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:30.904433 kubelet[2505]: E1108 00:32:30.904393 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:30.905257 kubelet[2505]: E1108 00:32:30.904519 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:30.907170 kubelet[2505]: E1108 00:32:30.907135 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lzqlc" podUID="417d4903-c711-42c7-9ef7-788a2e600314" Nov 8 00:32:30.939211 containerd[1494]: time="2025-11-08T00:32:30.939086447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:32:30.939211 containerd[1494]: time="2025-11-08T00:32:30.939157891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:32:30.939585 containerd[1494]: time="2025-11-08T00:32:30.939180353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:30.939585 containerd[1494]: time="2025-11-08T00:32:30.939296201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:32:30.962116 containerd[1494]: time="2025-11-08T00:32:30.962062897Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:30.969985 containerd[1494]: time="2025-11-08T00:32:30.968082625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:32:30.969985 containerd[1494]: time="2025-11-08T00:32:30.968173556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:32:30.970129 kubelet[2505]: E1108 00:32:30.968347 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:30.970129 kubelet[2505]: E1108 00:32:30.968392 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:30.970129 kubelet[2505]: E1108 00:32:30.968639 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86mmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lkl4b_calico-system(88835561-0fd8-4963-bbc3-b0aaf46c9820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:30.973308 containerd[1494]: time="2025-11-08T00:32:30.973272324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:32:30.976547 systemd[1]: Started cri-containerd-fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5.scope - libcontainer container fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5. 
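
Every Calico image pull in this log fails the same way: ghcr.io answers 404 for the v3.30.4 tag, containerd logs "trying next host - response was http.StatusNotFound", and the kubelet surfaces it as ErrImagePull. The check can be reproduced by hand against the registry's OCI distribution API; in the Go sketch below, the anonymous-token endpoint and scope syntax are assumptions about ghcr.io's setup, not values taken from the log:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        const repo = "flatcar/calico/goldmane"
        const tag = "v3.30.4"

        // Step 1: anonymous pull token (assumed ghcr.io token-service layout).
        resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var tok struct {
            Token string `json:"token"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
            panic(err)
        }

        // Step 2: HEAD the manifest; a 404 here corresponds to the
        // NotFound errors containerd reports above.
        req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+tok.Token)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
        res, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        res.Body.Close()
        fmt.Println(res.Status) // expect "404 Not Found" for a missing tag
    }
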
Nov 8 00:32:31.007439 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:32:31.046750 containerd[1494]: time="2025-11-08T00:32:31.046610808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bdc4f9f54-592vx,Uid:b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5\"" Nov 8 00:32:31.353421 containerd[1494]: time="2025-11-08T00:32:31.353377105Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:31.354832 containerd[1494]: time="2025-11-08T00:32:31.354768368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:32:31.355045 containerd[1494]: time="2025-11-08T00:32:31.354816349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:31.355127 kubelet[2505]: E1108 00:32:31.355058 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:31.355200 kubelet[2505]: E1108 00:32:31.355127 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:31.355463 kubelet[2505]: E1108 00:32:31.355397 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwdxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7689cf9c54-vlx96_calico-system(5d526354-b399-458e-b2b3-be2f314ae23a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:31.355587 containerd[1494]: time="2025-11-08T00:32:31.355464306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:32:31.357464 kubelet[2505]: E1108 00:32:31.357419 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" podUID="5d526354-b399-458e-b2b3-be2f314ae23a" Nov 8 00:32:31.396139 systemd-networkd[1407]: calic58c8ca9cbb: Gained IPv6LL Nov 8 00:32:31.736332 containerd[1494]: time="2025-11-08T00:32:31.736166772Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:31.737481 containerd[1494]: time="2025-11-08T00:32:31.737432910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:32:31.737608 containerd[1494]: time="2025-11-08T00:32:31.737487362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:32:31.737840 kubelet[2505]: E1108 00:32:31.737774 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:31.737840 kubelet[2505]: E1108 00:32:31.737834 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:31.738242 kubelet[2505]: E1108 00:32:31.738151 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86mmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lkl4b_calico-system(88835561-0fd8-4963-bbc3-b0aaf46c9820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:31.738392 containerd[1494]: time="2025-11-08T00:32:31.738218746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:31.739805 kubelet[2505]: E1108 00:32:31.739745 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:31.908538 kubelet[2505]: E1108 00:32:31.908287 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:31.909082 kubelet[2505]: E1108 00:32:31.909012 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:31.909082 kubelet[2505]: E1108 00:32:31.909071 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" podUID="5d526354-b399-458e-b2b3-be2f314ae23a" Nov 8 00:32:31.909817 kubelet[2505]: E1108 00:32:31.909118 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lzqlc" podUID="417d4903-c711-42c7-9ef7-788a2e600314" Nov 8 00:32:31.909720 systemd-networkd[1407]: cali37886e22187: Gained IPv6LL Nov 8 00:32:32.079115 containerd[1494]: time="2025-11-08T00:32:32.079061250Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:32.080521 containerd[1494]: time="2025-11-08T00:32:32.080461540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:32.080651 containerd[1494]: time="2025-11-08T00:32:32.080502757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:32.080745 kubelet[2505]: E1108 00:32:32.080704 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:32.080803 kubelet[2505]: E1108 00:32:32.080759 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:32.080969 kubelet[2505]: E1108 00:32:32.080893 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5snz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bdc4f9f54-592vx_calico-apiserver(b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:32.082343 kubelet[2505]: E1108 00:32:32.082312 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" podUID="b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9" Nov 8 00:32:32.164226 systemd-networkd[1407]: cali5e2e5c4031a: Gained IPv6LL Nov 8 00:32:32.548341 systemd-networkd[1407]: cali2ab280b94f4: Gained IPv6LL Nov 8 00:32:32.911599 kubelet[2505]: E1108 00:32:32.911549 
2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" podUID="b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9" Nov 8 00:32:35.503588 systemd[1]: Started sshd@9-10.0.0.145:22-10.0.0.1:54040.service - OpenSSH per-connection server daemon (10.0.0.1:54040). Nov 8 00:32:35.547390 sshd[4949]: Accepted publickey for core from 10.0.0.1 port 54040 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:32:35.549312 sshd[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:35.554197 systemd-logind[1456]: New session 10 of user core. Nov 8 00:32:35.565162 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:32:35.673441 sshd[4949]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:35.677697 systemd[1]: sshd@9-10.0.0.145:22-10.0.0.1:54040.service: Deactivated successfully. Nov 8 00:32:35.679858 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:32:35.680452 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:32:35.681368 systemd-logind[1456]: Removed session 10. Nov 8 00:32:38.651479 containerd[1494]: time="2025-11-08T00:32:38.650874910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:32:39.167161 containerd[1494]: time="2025-11-08T00:32:39.167100702Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:39.168196 containerd[1494]: time="2025-11-08T00:32:39.168149861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:32:39.168288 containerd[1494]: time="2025-11-08T00:32:39.168199945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:32:39.168460 kubelet[2505]: E1108 00:32:39.168393 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:32:39.168885 kubelet[2505]: E1108 00:32:39.168466 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:32:39.168885 kubelet[2505]: E1108 00:32:39.168593 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4c461a51f27f46ffb1d37efc97264654,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnszr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68ff55f559-tjknv_calico-system(576c7105-d7be-4c5c-87aa-116f53250b26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:39.170834 containerd[1494]: time="2025-11-08T00:32:39.170795106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:32:39.540470 containerd[1494]: time="2025-11-08T00:32:39.540327558Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:39.541555 containerd[1494]: time="2025-11-08T00:32:39.541503384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:32:39.541614 containerd[1494]: time="2025-11-08T00:32:39.541547257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:39.541805 kubelet[2505]: E1108 00:32:39.541745 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:32:39.541872 kubelet[2505]: E1108 00:32:39.541810 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:32:39.542021 kubelet[2505]: E1108 00:32:39.541966 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mnszr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68ff55f559-tjknv_calico-system(576c7105-d7be-4c5c-87aa-116f53250b26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:39.543186 kubelet[2505]: E1108 00:32:39.543147 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68ff55f559-tjknv" podUID="576c7105-d7be-4c5c-87aa-116f53250b26" Nov 8 00:32:40.687690 systemd[1]: Started sshd@10-10.0.0.145:22-10.0.0.1:54042.service - OpenSSH per-connection server daemon (10.0.0.1:54042). 
Nov 8 00:32:40.733364 sshd[4974]: Accepted publickey for core from 10.0.0.1 port 54042 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:32:40.735435 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:40.739599 systemd-logind[1456]: New session 11 of user core. Nov 8 00:32:40.745183 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:32:40.885803 sshd[4974]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:40.894332 systemd[1]: sshd@10-10.0.0.145:22-10.0.0.1:54042.service: Deactivated successfully. Nov 8 00:32:40.896771 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:32:40.898618 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:32:40.904337 systemd[1]: Started sshd@11-10.0.0.145:22-10.0.0.1:54050.service - OpenSSH per-connection server daemon (10.0.0.1:54050). Nov 8 00:32:40.905257 systemd-logind[1456]: Removed session 11. Nov 8 00:32:40.946355 sshd[4989]: Accepted publickey for core from 10.0.0.1 port 54050 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:32:40.948234 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:40.957197 systemd-logind[1456]: New session 12 of user core. Nov 8 00:32:40.968185 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:32:41.109444 sshd[4989]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:41.118564 systemd[1]: sshd@11-10.0.0.145:22-10.0.0.1:54050.service: Deactivated successfully. Nov 8 00:32:41.121574 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:32:41.122922 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:32:41.125631 systemd-logind[1456]: Removed session 12. Nov 8 00:32:41.134361 systemd[1]: Started sshd@12-10.0.0.145:22-10.0.0.1:54060.service - OpenSSH per-connection server daemon (10.0.0.1:54060). Nov 8 00:32:41.167575 sshd[5003]: Accepted publickey for core from 10.0.0.1 port 54060 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:32:41.169240 sshd[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:41.173687 systemd-logind[1456]: New session 13 of user core. Nov 8 00:32:41.181198 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:32:41.295249 sshd[5003]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:41.299868 systemd[1]: sshd@12-10.0.0.145:22-10.0.0.1:54060.service: Deactivated successfully. Nov 8 00:32:41.302066 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:32:41.302719 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:32:41.303555 systemd-logind[1456]: Removed session 13. 
Nov 8 00:32:41.651116 containerd[1494]: time="2025-11-08T00:32:41.651050502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:41.985721 containerd[1494]: time="2025-11-08T00:32:41.985573235Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:41.987077 containerd[1494]: time="2025-11-08T00:32:41.987047311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:41.987184 containerd[1494]: time="2025-11-08T00:32:41.987090643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:41.987238 kubelet[2505]: E1108 00:32:41.987202 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:41.987567 kubelet[2505]: E1108 00:32:41.987245 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:41.987567 kubelet[2505]: E1108 00:32:41.987369 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrd4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bdc4f9f54-9vq7q_calico-apiserver(51a57672-a43f-42d3-abfb-83cef5f71936): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:41.988570 kubelet[2505]: E1108 00:32:41.988526 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" podUID="51a57672-a43f-42d3-abfb-83cef5f71936" Nov 8 00:32:42.650992 containerd[1494]: time="2025-11-08T00:32:42.650763664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:32:43.029659 containerd[1494]: time="2025-11-08T00:32:43.029364956Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:43.030608 containerd[1494]: time="2025-11-08T00:32:43.030546242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:32:43.030737 containerd[1494]: time="2025-11-08T00:32:43.030584444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:32:43.030799 kubelet[2505]: E1108 00:32:43.030762 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:43.031219 kubelet[2505]: E1108 00:32:43.030811 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:43.031219 kubelet[2505]: E1108 00:32:43.030931 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86mmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lkl4b_calico-system(88835561-0fd8-4963-bbc3-b0aaf46c9820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:43.032806 containerd[1494]: time="2025-11-08T00:32:43.032780034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:32:43.465196 containerd[1494]: time="2025-11-08T00:32:43.465133316Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:43.557907 containerd[1494]: time="2025-11-08T00:32:43.557842985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:32:43.558056 containerd[1494]: time="2025-11-08T00:32:43.557897527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:32:43.558157 kubelet[2505]: E1108 00:32:43.558104 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:43.558200 kubelet[2505]: E1108 00:32:43.558163 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:43.558344 kubelet[2505]: E1108 00:32:43.558294 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86mmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lkl4b_calico-system(88835561-0fd8-4963-bbc3-b0aaf46c9820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:43.559502 kubelet[2505]: E1108 00:32:43.559438 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:44.651069 containerd[1494]: time="2025-11-08T00:32:44.650743954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:32:45.024273 containerd[1494]: time="2025-11-08T00:32:45.024099649Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:45.025400 containerd[1494]: time="2025-11-08T00:32:45.025353402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:32:45.025473 containerd[1494]: time="2025-11-08T00:32:45.025422351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:45.025591 kubelet[2505]: E1108 00:32:45.025539 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:45.025912 kubelet[2505]: E1108 00:32:45.025594 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:45.025912 kubelet[2505]: E1108 00:32:45.025739 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwdxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7689cf9c54-vlx96_calico-system(5d526354-b399-458e-b2b3-be2f314ae23a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:45.027193 kubelet[2505]: E1108 00:32:45.027149 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" podUID="5d526354-b399-458e-b2b3-be2f314ae23a" Nov 8 00:32:45.650117 containerd[1494]: time="2025-11-08T00:32:45.650070962Z" level=info msg="StopPodSandbox for \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\"" Nov 8 00:32:45.653001 containerd[1494]: time="2025-11-08T00:32:45.652971514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.688 [WARNING][5028] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0", GenerateName:"calico-apiserver-6bdc4f9f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9", ResourceVersion:"1260", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bdc4f9f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5", Pod:"calico-apiserver-6bdc4f9f54-592vx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ab280b94f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.688 [INFO][5028] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.689 [INFO][5028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" iface="eth0" netns="" Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.689 [INFO][5028] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.689 [INFO][5028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.710 [INFO][5038] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" HandleID="k8s-pod-network.aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.710 [INFO][5038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.710 [INFO][5038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.715 [WARNING][5038] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" HandleID="k8s-pod-network.aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.715 [INFO][5038] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" HandleID="k8s-pod-network.aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.717 [INFO][5038] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:45.724013 containerd[1494]: 2025-11-08 00:32:45.720 [INFO][5028] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:45.724758 containerd[1494]: time="2025-11-08T00:32:45.724045271Z" level=info msg="TearDown network for sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\" successfully" Nov 8 00:32:45.724758 containerd[1494]: time="2025-11-08T00:32:45.724070468Z" level=info msg="StopPodSandbox for \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\" returns successfully" Nov 8 00:32:45.724758 containerd[1494]: time="2025-11-08T00:32:45.724637932Z" level=info msg="RemovePodSandbox for \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\"" Nov 8 00:32:45.726858 containerd[1494]: time="2025-11-08T00:32:45.726827040Z" level=info msg="Forcibly stopping sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\"" Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.762 [WARNING][5055] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0", GenerateName:"calico-apiserver-6bdc4f9f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9", ResourceVersion:"1260", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bdc4f9f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb9f247fe64fe08b2e3de5e6ab376dc2bd225c97a92ee87a23d8004d2bc36bf5", Pod:"calico-apiserver-6bdc4f9f54-592vx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ab280b94f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.762 [INFO][5055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.762 [INFO][5055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" iface="eth0" netns="" Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.762 [INFO][5055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.762 [INFO][5055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.789 [INFO][5065] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" HandleID="k8s-pod-network.aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.789 [INFO][5065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.789 [INFO][5065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.794 [WARNING][5065] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" HandleID="k8s-pod-network.aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.794 [INFO][5065] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" HandleID="k8s-pod-network.aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--592vx-eth0" Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.795 [INFO][5065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:45.802937 containerd[1494]: 2025-11-08 00:32:45.798 [INFO][5055] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1" Nov 8 00:32:45.803479 containerd[1494]: time="2025-11-08T00:32:45.803063013Z" level=info msg="TearDown network for sandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\" successfully" Nov 8 00:32:45.814732 containerd[1494]: time="2025-11-08T00:32:45.814693116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:45.814804 containerd[1494]: time="2025-11-08T00:32:45.814760572Z" level=info msg="RemovePodSandbox \"aac1db0566c11d03bd4b75f5a64577fa7881481d4e1e0c3132c336b2780fefa1\" returns successfully" Nov 8 00:32:45.815422 containerd[1494]: time="2025-11-08T00:32:45.815394452Z" level=info msg="StopPodSandbox for \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\"" Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.854 [WARNING][5083] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h8h25-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"08a2c12f-2341-4bf8-ac6e-959cce58e330", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313", Pod:"coredns-668d6bf9bc-h8h25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c4027348f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.854 [INFO][5083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.854 [INFO][5083] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" iface="eth0" netns="" Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.854 [INFO][5083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.854 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.882 [INFO][5092] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" HandleID="k8s-pod-network.db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.882 [INFO][5092] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.882 [INFO][5092] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.887 [WARNING][5092] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" HandleID="k8s-pod-network.db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.887 [INFO][5092] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" HandleID="k8s-pod-network.db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.890 [INFO][5092] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:45.896014 containerd[1494]: 2025-11-08 00:32:45.892 [INFO][5083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:45.896014 containerd[1494]: time="2025-11-08T00:32:45.895295230Z" level=info msg="TearDown network for sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\" successfully" Nov 8 00:32:45.896014 containerd[1494]: time="2025-11-08T00:32:45.895318865Z" level=info msg="StopPodSandbox for \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\" returns successfully" Nov 8 00:32:45.896014 containerd[1494]: time="2025-11-08T00:32:45.895742861Z" level=info msg="RemovePodSandbox for \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\"" Nov 8 00:32:45.896014 containerd[1494]: time="2025-11-08T00:32:45.895763580Z" level=info msg="Forcibly stopping sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\"" Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.927 [WARNING][5110] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--h8h25-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"08a2c12f-2341-4bf8-ac6e-959cce58e330", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91e814e4d74d6a18047677d1f1d72866f3d01a5445cddf64095bbb6f15e7d313", Pod:"coredns-668d6bf9bc-h8h25", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46c4027348f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.927 [INFO][5110] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.927 [INFO][5110] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" iface="eth0" netns="" Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.927 [INFO][5110] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.927 [INFO][5110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.953 [INFO][5118] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" HandleID="k8s-pod-network.db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.954 [INFO][5118] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.954 [INFO][5118] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.959 [WARNING][5118] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" HandleID="k8s-pod-network.db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.959 [INFO][5118] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" HandleID="k8s-pod-network.db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Workload="localhost-k8s-coredns--668d6bf9bc--h8h25-eth0" Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.960 [INFO][5118] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:45.965841 containerd[1494]: 2025-11-08 00:32:45.963 [INFO][5110] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87" Nov 8 00:32:45.965841 containerd[1494]: time="2025-11-08T00:32:45.965783759Z" level=info msg="TearDown network for sandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\" successfully" Nov 8 00:32:45.969772 containerd[1494]: time="2025-11-08T00:32:45.969743660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:45.969846 containerd[1494]: time="2025-11-08T00:32:45.969786591Z" level=info msg="RemovePodSandbox \"db05636749b9110e3522cadb73a0a3bdfef0a100079678ea4bfd6fd5f4f32c87\" returns successfully" Nov 8 00:32:45.970242 containerd[1494]: time="2025-11-08T00:32:45.970217338Z" level=info msg="StopPodSandbox for \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\"" Nov 8 00:32:46.011236 containerd[1494]: time="2025-11-08T00:32:46.011193963Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:46.012708 containerd[1494]: time="2025-11-08T00:32:46.012596805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:46.012708 containerd[1494]: time="2025-11-08T00:32:46.012663079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:46.012978 kubelet[2505]: E1108 00:32:46.012903 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:46.013076 kubelet[2505]: E1108 00:32:46.012990 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:46.013426 kubelet[2505]: E1108 00:32:46.013253 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5snz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bdc4f9f54-592vx_calico-apiserver(b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:46.013532 containerd[1494]: time="2025-11-08T00:32:46.013389812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:32:46.015056 kubelet[2505]: E1108 00:32:46.015019 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" podUID="b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9" Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.000 [WARNING][5137] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0", GenerateName:"calico-apiserver-6bdc4f9f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"51a57672-a43f-42d3-abfb-83cef5f71936", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bdc4f9f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432", Pod:"calico-apiserver-6bdc4f9f54-9vq7q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9d8307f110", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.000 [INFO][5137] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.001 [INFO][5137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" iface="eth0" netns="" Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.001 [INFO][5137] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.001 [INFO][5137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.024 [INFO][5146] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" HandleID="k8s-pod-network.300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.025 [INFO][5146] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.025 [INFO][5146] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.030 [WARNING][5146] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" HandleID="k8s-pod-network.300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.030 [INFO][5146] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" HandleID="k8s-pod-network.300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.031 [INFO][5146] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:46.037030 containerd[1494]: 2025-11-08 00:32:46.034 [INFO][5137] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:46.037733 containerd[1494]: time="2025-11-08T00:32:46.037086293Z" level=info msg="TearDown network for sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\" successfully" Nov 8 00:32:46.037733 containerd[1494]: time="2025-11-08T00:32:46.037115087Z" level=info msg="StopPodSandbox for \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\" returns successfully" Nov 8 00:32:46.037733 containerd[1494]: time="2025-11-08T00:32:46.037592522Z" level=info msg="RemovePodSandbox for \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\"" Nov 8 00:32:46.037733 containerd[1494]: time="2025-11-08T00:32:46.037627628Z" level=info msg="Forcibly stopping sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\"" Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.072 [WARNING][5164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0", GenerateName:"calico-apiserver-6bdc4f9f54-", Namespace:"calico-apiserver", SelfLink:"", UID:"51a57672-a43f-42d3-abfb-83cef5f71936", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bdc4f9f54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f93b25815aa980eef14e317183bf724b0e24c7fae9caaafca3227709d1db0432", Pod:"calico-apiserver-6bdc4f9f54-9vq7q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9d8307f110", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.072 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.072 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" iface="eth0" netns="" Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.072 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.072 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.094 [INFO][5172] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" HandleID="k8s-pod-network.300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.094 [INFO][5172] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.094 [INFO][5172] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.099 [WARNING][5172] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" HandleID="k8s-pod-network.300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.099 [INFO][5172] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" HandleID="k8s-pod-network.300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Workload="localhost-k8s-calico--apiserver--6bdc4f9f54--9vq7q-eth0" Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.100 [INFO][5172] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:46.105825 containerd[1494]: 2025-11-08 00:32:46.103 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80" Nov 8 00:32:46.106362 containerd[1494]: time="2025-11-08T00:32:46.105877983Z" level=info msg="TearDown network for sandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\" successfully" Nov 8 00:32:46.113094 containerd[1494]: time="2025-11-08T00:32:46.113070798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:46.113165 containerd[1494]: time="2025-11-08T00:32:46.113110623Z" level=info msg="RemovePodSandbox \"300f2ca32fc888b82b8c94a6d41c63a0d4bfceaee87f0672bae51776c170bb80\" returns successfully" Nov 8 00:32:46.113667 containerd[1494]: time="2025-11-08T00:32:46.113633404Z" level=info msg="StopPodSandbox for \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\"" Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.143 [WARNING][5190] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lzqlc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"417d4903-c711-42c7-9ef7-788a2e600314", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982", Pod:"goldmane-666569f655-lzqlc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5e2e5c4031a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.144 [INFO][5190] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.144 [INFO][5190] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" iface="eth0" netns="" Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.144 [INFO][5190] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.144 [INFO][5190] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.165 [INFO][5198] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" HandleID="k8s-pod-network.58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.165 [INFO][5198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.165 [INFO][5198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.170 [WARNING][5198] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" HandleID="k8s-pod-network.58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.170 [INFO][5198] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" HandleID="k8s-pod-network.58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.172 [INFO][5198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:46.177908 containerd[1494]: 2025-11-08 00:32:46.175 [INFO][5190] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:46.178380 containerd[1494]: time="2025-11-08T00:32:46.177969783Z" level=info msg="TearDown network for sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\" successfully" Nov 8 00:32:46.178380 containerd[1494]: time="2025-11-08T00:32:46.178000351Z" level=info msg="StopPodSandbox for \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\" returns successfully" Nov 8 00:32:46.178449 containerd[1494]: time="2025-11-08T00:32:46.178420920Z" level=info msg="RemovePodSandbox for \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\"" Nov 8 00:32:46.178492 containerd[1494]: time="2025-11-08T00:32:46.178448051Z" level=info msg="Forcibly stopping sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\"" Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.212 [WARNING][5216] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lzqlc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"417d4903-c711-42c7-9ef7-788a2e600314", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"36185be15d56cfffb7909208dedd158702eb7a976086676707853024d4d46982", Pod:"goldmane-666569f655-lzqlc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5e2e5c4031a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.213 [INFO][5216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.213 [INFO][5216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" iface="eth0" netns="" Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.213 [INFO][5216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.213 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.233 [INFO][5225] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" HandleID="k8s-pod-network.58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.233 [INFO][5225] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.233 [INFO][5225] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.239 [WARNING][5225] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" HandleID="k8s-pod-network.58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.239 [INFO][5225] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" HandleID="k8s-pod-network.58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Workload="localhost-k8s-goldmane--666569f655--lzqlc-eth0" Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.240 [INFO][5225] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:46.246545 containerd[1494]: 2025-11-08 00:32:46.243 [INFO][5216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274" Nov 8 00:32:46.246545 containerd[1494]: time="2025-11-08T00:32:46.246518187Z" level=info msg="TearDown network for sandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\" successfully" Nov 8 00:32:46.250923 containerd[1494]: time="2025-11-08T00:32:46.250893546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:46.250998 containerd[1494]: time="2025-11-08T00:32:46.250936407Z" level=info msg="RemovePodSandbox \"58f462507804df1df7b22550f8b049bc75f04a7f76adace6e038493cea488274\" returns successfully" Nov 8 00:32:46.251541 containerd[1494]: time="2025-11-08T00:32:46.251492730Z" level=info msg="StopPodSandbox for \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\"" Nov 8 00:32:46.309710 systemd[1]: Started sshd@13-10.0.0.145:22-10.0.0.1:44304.service - OpenSSH per-connection server daemon (10.0.0.1:44304). Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.282 [WARNING][5243] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lkl4b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88835561-0fd8-4963-bbc3-b0aaf46c9820", ResourceVersion:"1242", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0", Pod:"csi-node-driver-lkl4b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic58c8ca9cbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.283 [INFO][5243] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.283 [INFO][5243] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" iface="eth0" netns="" Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.283 [INFO][5243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.283 [INFO][5243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.307 [INFO][5251] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" HandleID="k8s-pod-network.5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.307 [INFO][5251] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.307 [INFO][5251] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.314 [WARNING][5251] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" HandleID="k8s-pod-network.5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.314 [INFO][5251] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" HandleID="k8s-pod-network.5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.316 [INFO][5251] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:46.322379 containerd[1494]: 2025-11-08 00:32:46.319 [INFO][5243] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:46.322886 containerd[1494]: time="2025-11-08T00:32:46.322427550Z" level=info msg="TearDown network for sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\" successfully" Nov 8 00:32:46.322886 containerd[1494]: time="2025-11-08T00:32:46.322452888Z" level=info msg="StopPodSandbox for \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\" returns successfully" Nov 8 00:32:46.323107 containerd[1494]: time="2025-11-08T00:32:46.323058565Z" level=info msg="RemovePodSandbox for \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\"" Nov 8 00:32:46.323107 containerd[1494]: time="2025-11-08T00:32:46.323097558Z" level=info msg="Forcibly stopping sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\"" Nov 8 00:32:46.366877 sshd[5259]: Accepted publickey for core from 10.0.0.1 port 44304 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:32:46.369378 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:46.376244 containerd[1494]: time="2025-11-08T00:32:46.376043988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:46.376460 systemd-logind[1456]: New session 14 of user core. 
Nov 8 00:32:46.378005 containerd[1494]: time="2025-11-08T00:32:46.377321234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:32:46.378005 containerd[1494]: time="2025-11-08T00:32:46.377435499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:46.378161 kubelet[2505]: E1108 00:32:46.377582 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:46.378161 kubelet[2505]: E1108 00:32:46.377640 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:46.378161 kubelet[2505]: E1108 00:32:46.377779 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqhfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lzqlc_calico-system(417d4903-c711-42c7-9ef7-788a2e600314): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:46.385383 kubelet[2505]: E1108 00:32:46.379044 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lzqlc" podUID="417d4903-c711-42c7-9ef7-788a2e600314" Nov 8 00:32:46.385249 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.361 [WARNING][5270] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lkl4b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88835561-0fd8-4963-bbc3-b0aaf46c9820", ResourceVersion:"1242", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a0c536ea2d092dcb5b6f9d6769ff77158e05f6eda5e98d3b58c04f81f1d080a0", Pod:"csi-node-driver-lkl4b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic58c8ca9cbb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.361 [INFO][5270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.361 [INFO][5270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" iface="eth0" netns="" Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.362 [INFO][5270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.362 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.385 [INFO][5280] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" HandleID="k8s-pod-network.5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.386 [INFO][5280] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.386 [INFO][5280] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.394 [WARNING][5280] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" HandleID="k8s-pod-network.5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.394 [INFO][5280] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" HandleID="k8s-pod-network.5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Workload="localhost-k8s-csi--node--driver--lkl4b-eth0" Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.395 [INFO][5280] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:46.401680 containerd[1494]: 2025-11-08 00:32:46.398 [INFO][5270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3" Nov 8 00:32:46.402259 containerd[1494]: time="2025-11-08T00:32:46.401688393Z" level=info msg="TearDown network for sandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\" successfully" Nov 8 00:32:46.405588 containerd[1494]: time="2025-11-08T00:32:46.405536633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:46.405588 containerd[1494]: time="2025-11-08T00:32:46.405580866Z" level=info msg="RemovePodSandbox \"5e194dc3131e0bfd8002c7ceb3f854b01a7da7f2586d085cf1d3f93f02d62bf3\" returns successfully" Nov 8 00:32:46.406201 containerd[1494]: time="2025-11-08T00:32:46.406162928Z" level=info msg="StopPodSandbox for \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\"" Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.443 [WARNING][5300] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" WorkloadEndpoint="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.443 [INFO][5300] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.443 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" iface="eth0" netns="" Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.443 [INFO][5300] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.443 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.466 [INFO][5312] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" HandleID="k8s-pod-network.94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Workload="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.467 [INFO][5312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.467 [INFO][5312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.476 [WARNING][5312] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" HandleID="k8s-pod-network.94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Workload="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.476 [INFO][5312] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" HandleID="k8s-pod-network.94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Workload="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.477 [INFO][5312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:46.484827 containerd[1494]: 2025-11-08 00:32:46.481 [INFO][5300] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:46.485701 containerd[1494]: time="2025-11-08T00:32:46.484875672Z" level=info msg="TearDown network for sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\" successfully" Nov 8 00:32:46.485701 containerd[1494]: time="2025-11-08T00:32:46.484900869Z" level=info msg="StopPodSandbox for \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\" returns successfully" Nov 8 00:32:46.485701 containerd[1494]: time="2025-11-08T00:32:46.485648041Z" level=info msg="RemovePodSandbox for \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\"" Nov 8 00:32:46.485701 containerd[1494]: time="2025-11-08T00:32:46.485695410Z" level=info msg="Forcibly stopping sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\"" Nov 8 00:32:46.533460 sshd[5259]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:46.539536 systemd[1]: sshd@13-10.0.0.145:22-10.0.0.1:44304.service: Deactivated successfully. Nov 8 00:32:46.541631 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:32:46.542281 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:32:46.543294 systemd-logind[1456]: Removed session 14. 
Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.530 [WARNING][5335] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" WorkloadEndpoint="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.531 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.531 [INFO][5335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" iface="eth0" netns="" Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.531 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.531 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.551 [INFO][5343] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" HandleID="k8s-pod-network.94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Workload="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.551 [INFO][5343] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.552 [INFO][5343] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.557 [WARNING][5343] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" HandleID="k8s-pod-network.94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Workload="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.557 [INFO][5343] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" HandleID="k8s-pod-network.94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Workload="localhost-k8s-whisker--8696ddb695--64p2t-eth0" Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.558 [INFO][5343] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:46.564330 containerd[1494]: 2025-11-08 00:32:46.561 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61" Nov 8 00:32:46.564760 containerd[1494]: time="2025-11-08T00:32:46.564363229Z" level=info msg="TearDown network for sandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\" successfully" Nov 8 00:32:46.572022 containerd[1494]: time="2025-11-08T00:32:46.571979950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:32:46.572022 containerd[1494]: time="2025-11-08T00:32:46.572024053Z" level=info msg="RemovePodSandbox \"94b736f43dc5521e681f809118d8d7f8b414f0987ca1e0c4abc3685d9f93ef61\" returns successfully" Nov 8 00:32:46.572517 containerd[1494]: time="2025-11-08T00:32:46.572494004Z" level=info msg="StopPodSandbox for \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\"" Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.604 [WARNING][5363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--58f4z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"123d6cb1-1650-4283-829f-77b1235c57a8", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2", Pod:"coredns-668d6bf9bc-58f4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califcc1381bdd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.604 [INFO][5363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.604 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" iface="eth0" netns="" Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.604 [INFO][5363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.604 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.624 [INFO][5372] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" HandleID="k8s-pod-network.32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.624 [INFO][5372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.624 [INFO][5372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.629 [WARNING][5372] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" HandleID="k8s-pod-network.32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.629 [INFO][5372] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" HandleID="k8s-pod-network.32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.632 [INFO][5372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:46.638100 containerd[1494]: 2025-11-08 00:32:46.635 [INFO][5363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:46.638553 containerd[1494]: time="2025-11-08T00:32:46.638137758Z" level=info msg="TearDown network for sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\" successfully" Nov 8 00:32:46.638553 containerd[1494]: time="2025-11-08T00:32:46.638163757Z" level=info msg="StopPodSandbox for \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\" returns successfully" Nov 8 00:32:46.638712 containerd[1494]: time="2025-11-08T00:32:46.638680245Z" level=info msg="RemovePodSandbox for \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\"" Nov 8 00:32:46.638753 containerd[1494]: time="2025-11-08T00:32:46.638712546Z" level=info msg="Forcibly stopping sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\"" Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.672 [WARNING][5389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--58f4z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"123d6cb1-1650-4283-829f-77b1235c57a8", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d32ec695dd79dfe3d73da3173cdf8cc03027acdd0c67faa8085e7649f5e4dd2", Pod:"coredns-668d6bf9bc-58f4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califcc1381bdd7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.673 [INFO][5389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.673 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" iface="eth0" netns="" Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.673 [INFO][5389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.673 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.692 [INFO][5398] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" HandleID="k8s-pod-network.32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.692 [INFO][5398] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.692 [INFO][5398] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.700 [WARNING][5398] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" HandleID="k8s-pod-network.32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.700 [INFO][5398] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" HandleID="k8s-pod-network.32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Workload="localhost-k8s-coredns--668d6bf9bc--58f4z-eth0" Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.702 [INFO][5398] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:46.708055 containerd[1494]: 2025-11-08 00:32:46.705 [INFO][5389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d" Nov 8 00:32:46.709034 containerd[1494]: time="2025-11-08T00:32:46.708104672Z" level=info msg="TearDown network for sandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\" successfully" Nov 8 00:32:46.947214 containerd[1494]: time="2025-11-08T00:32:46.947153315Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:46.947214 containerd[1494]: time="2025-11-08T00:32:46.947206995Z" level=info msg="RemovePodSandbox \"32cab8f0953a6e661b562a19c8742daac59f8053caeddd4aabd6ab18e47a5e9d\" returns successfully" Nov 8 00:32:46.947574 containerd[1494]: time="2025-11-08T00:32:46.947533928Z" level=info msg="StopPodSandbox for \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\"" Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:46.987 [WARNING][5422] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0", GenerateName:"calico-kube-controllers-7689cf9c54-", Namespace:"calico-system", SelfLink:"", UID:"5d526354-b399-458e-b2b3-be2f314ae23a", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7689cf9c54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36", Pod:"calico-kube-controllers-7689cf9c54-vlx96", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37886e22187", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:46.987 [INFO][5422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:46.987 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" iface="eth0" netns="" Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:46.987 [INFO][5422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:46.987 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:47.011 [INFO][5431] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" HandleID="k8s-pod-network.80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:47.011 [INFO][5431] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:47.012 [INFO][5431] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:47.017 [WARNING][5431] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" HandleID="k8s-pod-network.80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:47.017 [INFO][5431] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" HandleID="k8s-pod-network.80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:47.020 [INFO][5431] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:47.026772 containerd[1494]: 2025-11-08 00:32:47.024 [INFO][5422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:47.027336 containerd[1494]: time="2025-11-08T00:32:47.026818564Z" level=info msg="TearDown network for sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\" successfully" Nov 8 00:32:47.027336 containerd[1494]: time="2025-11-08T00:32:47.026843521Z" level=info msg="StopPodSandbox for \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\" returns successfully" Nov 8 00:32:47.027392 containerd[1494]: time="2025-11-08T00:32:47.027327658Z" level=info msg="RemovePodSandbox for \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\"" Nov 8 00:32:47.027392 containerd[1494]: time="2025-11-08T00:32:47.027362874Z" level=info msg="Forcibly stopping sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\"" Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.063 [WARNING][5450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0", GenerateName:"calico-kube-controllers-7689cf9c54-", Namespace:"calico-system", SelfLink:"", UID:"5d526354-b399-458e-b2b3-be2f314ae23a", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 32, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7689cf9c54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe28409e461d7804f09ece76309a03336c7dd89bc0d2ceae7d0a619b68b35a36", Pod:"calico-kube-controllers-7689cf9c54-vlx96", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37886e22187", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.063 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.063 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" iface="eth0" netns="" Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.063 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.063 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.084 [INFO][5459] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" HandleID="k8s-pod-network.80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.084 [INFO][5459] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.084 [INFO][5459] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.091 [WARNING][5459] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" HandleID="k8s-pod-network.80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.091 [INFO][5459] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" HandleID="k8s-pod-network.80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Workload="localhost-k8s-calico--kube--controllers--7689cf9c54--vlx96-eth0" Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.093 [INFO][5459] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:32:47.099598 containerd[1494]: 2025-11-08 00:32:47.096 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39" Nov 8 00:32:47.099598 containerd[1494]: time="2025-11-08T00:32:47.099596626Z" level=info msg="TearDown network for sandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\" successfully" Nov 8 00:32:47.298201 containerd[1494]: time="2025-11-08T00:32:47.298050047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:47.298201 containerd[1494]: time="2025-11-08T00:32:47.298132642Z" level=info msg="RemovePodSandbox \"80484c79e3ccf8463aa207e5ee51aa294b1c056aa5d7e5d06aa68653050e1d39\" returns successfully" Nov 8 00:32:51.545843 systemd[1]: Started sshd@14-10.0.0.145:22-10.0.0.1:44310.service - OpenSSH per-connection server daemon (10.0.0.1:44310). Nov 8 00:32:51.590820 sshd[5474]: Accepted publickey for core from 10.0.0.1 port 44310 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:32:51.592746 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:51.597643 systemd-logind[1456]: New session 15 of user core. Nov 8 00:32:51.604157 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:32:51.720198 sshd[5474]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:51.724819 systemd[1]: sshd@14-10.0.0.145:22-10.0.0.1:44310.service: Deactivated successfully. Nov 8 00:32:51.727131 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:32:51.727823 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:32:51.728748 systemd-logind[1456]: Removed session 15. 
Nov 8 00:32:54.650188 kubelet[2505]: E1108 00:32:54.650126 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:54.651118 kubelet[2505]: E1108 00:32:54.650717 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" podUID="51a57672-a43f-42d3-abfb-83cef5f71936" Nov 8 00:32:54.651367 kubelet[2505]: E1108 00:32:54.651323 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68ff55f559-tjknv" podUID="576c7105-d7be-4c5c-87aa-116f53250b26" Nov 8 00:32:56.736246 systemd[1]: Started sshd@15-10.0.0.145:22-10.0.0.1:53940.service - OpenSSH per-connection server daemon (10.0.0.1:53940). Nov 8 00:32:56.778537 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 53940 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:32:56.780414 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:56.784570 systemd-logind[1456]: New session 16 of user core. Nov 8 00:32:56.792091 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:32:56.910812 sshd[5490]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:56.915264 systemd[1]: sshd@15-10.0.0.145:22-10.0.0.1:53940.service: Deactivated successfully. Nov 8 00:32:56.917525 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:32:56.918263 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:32:56.919280 systemd-logind[1456]: Removed session 16. 
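The recurring dns.go:153 error is kubelet capping the nameserver list it renders into a pod's resolv.conf at three entries (the traditional glibc resolver limit): extra host nameservers are dropped and the first three, 1.1.1.1 1.0.0.1 8.8.8.8, are applied. A hedged sketch of that truncation; the toy parser below is not kubelet's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers is the limit kubelet enforces when building a pod's
// resolv.conf (three, matching the classic glibc resolver limit).
const maxNameservers = 3

func applyLimit(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		f := strings.Fields(line)
		if len(f) >= 2 && f[0] == "nameserver" {
			servers = append(servers, f[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(servers, " "))
	}
	return servers
}

func main() {
	// A fourth nameserver on the host is enough to trigger the log line above.
	host := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9"
	fmt.Println(applyLimit(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```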
Nov 8 00:32:57.432087 kubelet[2505]: E1108 00:32:57.432048 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:32:57.650710 kubelet[2505]: E1108 00:32:57.650644 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" podUID="b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9" Nov 8 00:32:58.650892 kubelet[2505]: E1108 00:32:58.650816 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:32:59.650194 kubelet[2505]: E1108 00:32:59.650144 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:33:00.650947 kubelet[2505]: E1108 00:33:00.650881 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" podUID="5d526354-b399-458e-b2b3-be2f314ae23a" Nov 8 00:33:01.650109 kubelet[2505]: E1108 00:33:01.650052 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lzqlc" podUID="417d4903-c711-42c7-9ef7-788a2e600314" Nov 8 00:33:01.923729 systemd[1]: Started 
sshd@16-10.0.0.145:22-10.0.0.1:53954.service - OpenSSH per-connection server daemon (10.0.0.1:53954). Nov 8 00:33:01.963831 sshd[5530]: Accepted publickey for core from 10.0.0.1 port 53954 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:33:01.965727 sshd[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:01.969815 systemd-logind[1456]: New session 17 of user core. Nov 8 00:33:01.980157 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:33:02.104489 sshd[5530]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:02.109990 systemd[1]: sshd@16-10.0.0.145:22-10.0.0.1:53954.service: Deactivated successfully. Nov 8 00:33:02.112292 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:33:02.113171 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:33:02.114473 systemd-logind[1456]: Removed session 17. Nov 8 00:33:07.116885 systemd[1]: Started sshd@17-10.0.0.145:22-10.0.0.1:49450.service - OpenSSH per-connection server daemon (10.0.0.1:49450). Nov 8 00:33:07.274260 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 49450 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:33:07.276003 sshd[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:07.280300 systemd-logind[1456]: New session 18 of user core. Nov 8 00:33:07.294088 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:33:07.420540 sshd[5552]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:07.429037 systemd[1]: sshd@17-10.0.0.145:22-10.0.0.1:49450.service: Deactivated successfully. Nov 8 00:33:07.431041 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:33:07.432485 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:33:07.433893 systemd[1]: Started sshd@18-10.0.0.145:22-10.0.0.1:49464.service - OpenSSH per-connection server daemon (10.0.0.1:49464). Nov 8 00:33:07.434758 systemd-logind[1456]: Removed session 18. Nov 8 00:33:07.472364 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 49464 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:33:07.475083 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:07.479389 systemd-logind[1456]: New session 19 of user core. Nov 8 00:33:07.485081 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:33:07.650449 kubelet[2505]: E1108 00:33:07.650392 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:33:07.809430 sshd[5566]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:07.820258 systemd[1]: sshd@18-10.0.0.145:22-10.0.0.1:49464.service: Deactivated successfully. Nov 8 00:33:07.822336 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:33:07.824265 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:33:07.829224 systemd[1]: Started sshd@19-10.0.0.145:22-10.0.0.1:49470.service - OpenSSH per-connection server daemon (10.0.0.1:49470). Nov 8 00:33:07.830419 systemd-logind[1456]: Removed session 19. 
Nov 8 00:33:07.867597 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 49470 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:33:07.869439 sshd[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:07.874552 systemd-logind[1456]: New session 20 of user core. Nov 8 00:33:07.880104 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:33:08.467047 sshd[5578]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:08.479410 systemd[1]: sshd@19-10.0.0.145:22-10.0.0.1:49470.service: Deactivated successfully. Nov 8 00:33:08.481703 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:33:08.487089 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:33:08.495285 systemd[1]: Started sshd@20-10.0.0.145:22-10.0.0.1:49482.service - OpenSSH per-connection server daemon (10.0.0.1:49482). Nov 8 00:33:08.496803 systemd-logind[1456]: Removed session 20. Nov 8 00:33:08.533864 sshd[5600]: Accepted publickey for core from 10.0.0.1 port 49482 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:33:08.535641 sshd[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:08.539695 systemd-logind[1456]: New session 21 of user core. Nov 8 00:33:08.546088 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:33:08.651791 containerd[1494]: time="2025-11-08T00:33:08.651747877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:33:09.031601 containerd[1494]: time="2025-11-08T00:33:09.031546171Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:09.051296 containerd[1494]: time="2025-11-08T00:33:09.051229080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:33:09.051370 containerd[1494]: time="2025-11-08T00:33:09.051286470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:33:09.051490 kubelet[2505]: E1108 00:33:09.051437 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:33:09.051490 kubelet[2505]: E1108 00:33:09.051483 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:33:09.051928 kubelet[2505]: E1108 00:33:09.051602 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrd4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bdc4f9f54-9vq7q_calico-apiserver(51a57672-a43f-42d3-abfb-83cef5f71936): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:09.052794 kubelet[2505]: E1108 00:33:09.052759 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" podUID="51a57672-a43f-42d3-abfb-83cef5f71936" Nov 8 00:33:09.093388 sshd[5600]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:09.105119 systemd[1]: sshd@20-10.0.0.145:22-10.0.0.1:49482.service: Deactivated successfully. Nov 8 00:33:09.108584 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:33:09.110629 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:33:09.121326 systemd[1]: Started sshd@21-10.0.0.145:22-10.0.0.1:49488.service - OpenSSH per-connection server daemon (10.0.0.1:49488). Nov 8 00:33:09.122481 systemd-logind[1456]: Removed session 21. 
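Each failed PullImage above flips the container's state between ErrImagePull (an attempt just failed) and ImagePullBackOff (kubelet is waiting before retrying), and the wait roughly doubles per failure up to a cap. A sketch of that cycle; the 10s initial delay and 300s cap match kubelet's defaults as I understand them, but treat both numbers as assumptions:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second            // assumed initial image-pull backoff
	const maxBackoff = 300 * time.Second // assumed cap (5 minutes)

	for attempt := 1; attempt <= 6; attempt++ {
		// The registry keeps answering 404, so every attempt fails identically.
		fmt.Printf("attempt %d: ghcr.io/flatcar/calico/apiserver:v3.30.4 not found; ErrImagePull, back off %s\n",
			attempt, delay)
		delay *= 2
		if delay > maxBackoff {
			delay = maxBackoff
		}
	}
}
```

This is why the log alternates "PullImage ... failed" bursts with long stretches of "Back-off pulling image" messages for the same pods.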
Nov 8 00:33:09.155050 sshd[5612]: Accepted publickey for core from 10.0.0.1 port 49488 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:33:09.157085 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:09.163256 systemd-logind[1456]: New session 22 of user core. Nov 8 00:33:09.170228 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:33:09.291801 sshd[5612]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:09.296532 systemd[1]: sshd@21-10.0.0.145:22-10.0.0.1:49488.service: Deactivated successfully. Nov 8 00:33:09.299215 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:33:09.300010 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:33:09.301120 systemd-logind[1456]: Removed session 22. Nov 8 00:33:09.650770 containerd[1494]: time="2025-11-08T00:33:09.650581041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:33:10.032498 containerd[1494]: time="2025-11-08T00:33:10.032330446Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:10.033927 containerd[1494]: time="2025-11-08T00:33:10.033849457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:33:10.034003 containerd[1494]: time="2025-11-08T00:33:10.033927017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:33:10.034175 kubelet[2505]: E1108 00:33:10.034125 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:33:10.034225 kubelet[2505]: E1108 00:33:10.034182 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:33:10.034335 kubelet[2505]: E1108 00:33:10.034293 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4c461a51f27f46ffb1d37efc97264654,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mnszr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68ff55f559-tjknv_calico-system(576c7105-d7be-4c5c-87aa-116f53250b26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:10.036648 containerd[1494]: time="2025-11-08T00:33:10.036415787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:33:10.378055 containerd[1494]: time="2025-11-08T00:33:10.377993286Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:10.475116 containerd[1494]: time="2025-11-08T00:33:10.475014367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:33:10.475116 containerd[1494]: time="2025-11-08T00:33:10.475019637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:33:10.475282 kubelet[2505]: E1108 00:33:10.475240 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:33:10.475282 kubelet[2505]: E1108 00:33:10.475279 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:33:10.475929 kubelet[2505]: E1108 00:33:10.475382 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mnszr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68ff55f559-tjknv_calico-system(576c7105-d7be-4c5c-87aa-116f53250b26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:10.476848 kubelet[2505]: E1108 00:33:10.476774 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68ff55f559-tjknv" podUID="576c7105-d7be-4c5c-87aa-116f53250b26" Nov 8 00:33:10.650258 kubelet[2505]: E1108 00:33:10.650110 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:33:10.651316 containerd[1494]: 
time="2025-11-08T00:33:10.651235228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:33:11.126394 containerd[1494]: time="2025-11-08T00:33:11.126319069Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:11.127552 containerd[1494]: time="2025-11-08T00:33:11.127507075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:33:11.127667 containerd[1494]: time="2025-11-08T00:33:11.127585816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:33:11.127783 kubelet[2505]: E1108 00:33:11.127731 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:33:11.127872 kubelet[2505]: E1108 00:33:11.127819 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:33:11.128084 kubelet[2505]: E1108 00:33:11.128006 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5snz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bdc4f9f54-592vx_calico-apiserver(b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:11.129234 kubelet[2505]: E1108 00:33:11.129197 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" podUID="b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9" Nov 8 00:33:11.650458 containerd[1494]: time="2025-11-08T00:33:11.650282149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:33:12.080971 containerd[1494]: time="2025-11-08T00:33:12.080892887Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:12.082256 containerd[1494]: time="2025-11-08T00:33:12.082183668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:33:12.082324 containerd[1494]: time="2025-11-08T00:33:12.082260685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:33:12.082508 kubelet[2505]: E1108 00:33:12.082459 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:33:12.082909 kubelet[2505]: E1108 00:33:12.082520 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:33:12.082909 kubelet[2505]: E1108 00:33:12.082672 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86mmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lkl4b_calico-system(88835561-0fd8-4963-bbc3-b0aaf46c9820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:12.084920 containerd[1494]: time="2025-11-08T00:33:12.084676260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:33:12.431210 containerd[1494]: time="2025-11-08T00:33:12.431069945Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:12.468351 containerd[1494]: time="2025-11-08T00:33:12.468295892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:33:12.468411 containerd[1494]: time="2025-11-08T00:33:12.468315971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:33:12.468605 kubelet[2505]: E1108 00:33:12.468551 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:33:12.470965 kubelet[2505]: E1108 00:33:12.468610 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:33:12.470965 kubelet[2505]: E1108 00:33:12.469151 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86mmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lkl4b_calico-system(88835561-0fd8-4963-bbc3-b0aaf46c9820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:12.472871 kubelet[2505]: E1108 00:33:12.472734 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lkl4b" podUID="88835561-0fd8-4963-bbc3-b0aaf46c9820" Nov 8 00:33:13.649377 kubelet[2505]: E1108 00:33:13.649340 2505 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:33:14.302969 systemd[1]: Started sshd@22-10.0.0.145:22-10.0.0.1:49064.service - OpenSSH per-connection server daemon (10.0.0.1:49064). Nov 8 00:33:14.340667 sshd[5632]: Accepted publickey for core from 10.0.0.1 port 49064 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:33:14.342424 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:14.346583 systemd-logind[1456]: New session 23 of user core. Nov 8 00:33:14.353104 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:33:14.464696 sshd[5632]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:14.469058 systemd[1]: sshd@22-10.0.0.145:22-10.0.0.1:49064.service: Deactivated successfully. Nov 8 00:33:14.471408 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:33:14.472230 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:33:14.473291 systemd-logind[1456]: Removed session 23. Nov 8 00:33:14.651484 containerd[1494]: time="2025-11-08T00:33:14.651447283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:33:15.010560 containerd[1494]: time="2025-11-08T00:33:15.010381154Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:15.011627 containerd[1494]: time="2025-11-08T00:33:15.011564146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:33:15.011762 containerd[1494]: time="2025-11-08T00:33:15.011627617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:33:15.011899 kubelet[2505]: E1108 00:33:15.011730 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:33:15.011899 kubelet[2505]: E1108 00:33:15.011768 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:33:15.012361 kubelet[2505]: E1108 00:33:15.012005 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xwdxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7689cf9c54-vlx96_calico-system(5d526354-b399-458e-b2b3-be2f314ae23a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:15.012450 containerd[1494]: time="2025-11-08T00:33:15.012017784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:33:15.013594 kubelet[2505]: E1108 00:33:15.013568 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7689cf9c54-vlx96" 
podUID="5d526354-b399-458e-b2b3-be2f314ae23a" Nov 8 00:33:15.379535 containerd[1494]: time="2025-11-08T00:33:15.379483620Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:33:15.380749 containerd[1494]: time="2025-11-08T00:33:15.380718460Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:33:15.380852 containerd[1494]: time="2025-11-08T00:33:15.380796960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:33:15.380973 kubelet[2505]: E1108 00:33:15.380920 2505 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:33:15.381030 kubelet[2505]: E1108 00:33:15.380995 2505 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:33:15.381177 kubelet[2505]: E1108 00:33:15.381133 2505 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqhfr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lzqlc_calico-system(417d4903-c711-42c7-9ef7-788a2e600314): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:33:15.382337 kubelet[2505]: E1108 00:33:15.382305 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lzqlc" podUID="417d4903-c711-42c7-9ef7-788a2e600314" Nov 8 00:33:19.483054 systemd[1]: Started sshd@23-10.0.0.145:22-10.0.0.1:49072.service - OpenSSH per-connection server daemon (10.0.0.1:49072). Nov 8 00:33:19.522002 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 49072 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:33:19.523698 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:19.528383 systemd-logind[1456]: New session 24 of user core. Nov 8 00:33:19.535169 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:33:19.638529 sshd[5646]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:19.642720 systemd[1]: sshd@23-10.0.0.145:22-10.0.0.1:49072.service: Deactivated successfully. Nov 8 00:33:19.644938 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:33:19.645536 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:33:19.646430 systemd-logind[1456]: Removed session 24. Nov 8 00:33:21.007274 update_engine[1459]: I20251108 00:33:21.007189 1459 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 8 00:33:21.007274 update_engine[1459]: I20251108 00:33:21.007261 1459 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 8 00:33:21.008407 update_engine[1459]: I20251108 00:33:21.008374 1459 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 8 00:33:21.008946 update_engine[1459]: I20251108 00:33:21.008914 1459 omaha_request_params.cc:62] Current group set to lts Nov 8 00:33:21.009085 update_engine[1459]: I20251108 00:33:21.009059 1459 update_attempter.cc:499] Already updated boot flags. Skipping. 
Nov 8 00:33:21.009085 update_engine[1459]: I20251108 00:33:21.009073 1459 update_attempter.cc:643] Scheduling an action processor start. Nov 8 00:33:21.009481 update_engine[1459]: I20251108 00:33:21.009093 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 8 00:33:21.009481 update_engine[1459]: I20251108 00:33:21.009138 1459 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 8 00:33:21.009481 update_engine[1459]: I20251108 00:33:21.009215 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 8 00:33:21.009481 update_engine[1459]: I20251108 00:33:21.009227 1459 omaha_request_action.cc:272] Request: Nov 8 00:33:21.009481 update_engine[1459]: [Omaha request XML elided: the <request> body was stripped from this capture] Nov 8 00:33:21.009481 update_engine[1459]: I20251108 00:33:21.009238 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:33:21.016179 update_engine[1459]: I20251108 00:33:21.016134 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:33:21.016592 update_engine[1459]: I20251108 00:33:21.016499 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 8 00:33:21.017610 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 8 00:33:21.024717 update_engine[1459]: E20251108 00:33:21.024675 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:33:21.024779 update_engine[1459]: I20251108 00:33:21.024760 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 8 00:33:22.653987 kubelet[2505]: E1108 00:33:22.650923 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-9vq7q" podUID="51a57672-a43f-42d3-abfb-83cef5f71936" Nov 8 00:33:24.651772 systemd[1]: Started sshd@24-10.0.0.145:22-10.0.0.1:58596.service - OpenSSH per-connection server daemon (10.0.0.1:58596). 
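The update_engine failure above is intentional rather than a bug: on Flatcar, setting SERVER=disabled in /etc/flatcar/update.conf points the Omaha endpoint at the literal string "disabled", so the check dies at DNS resolution ("Could not resolve host: disabled") and retries harmlessly, while GROUP=lts matches the "Current group set to lts" line. A sketch of how that value flows into the request; the config keys are real Flatcar convention, but the parsing code and the default URL in the fallback are assumptions:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// pickServer pulls SERVER= out of update.conf-style text. "disabled" is the
// documented Flatcar way to switch updates off; it is not a resolvable host.
func pickServer(conf string) string {
	for _, line := range strings.Split(conf, "\n") {
		if v, ok := strings.CutPrefix(line, "SERVER="); ok {
			return v
		}
	}
	return "https://public.update.flatcar-linux.net/v1/update/" // assumed default
}

func main() {
	server := pickServer("GROUP=lts\nSERVER=disabled")
	fmt.Println("Posting an Omaha request to", server)

	// Reproduces the failure mode in the log: DNS lookup of "disabled" fails,
	// so update_engine logs "No HTTP response" and schedules a retry.
	if _, err := net.LookupHost(server); err != nil {
		fmt.Println("no HTTP response, will retry:", err)
	}
}
```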
Nov 8 00:33:24.654486 kubelet[2505]: E1108 00:33:24.652666 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bdc4f9f54-592vx" podUID="b48386d0-fbeb-4205-a9d9-bf52a9eeb9e9" Nov 8 00:33:24.721227 sshd[5663]: Accepted publickey for core from 10.0.0.1 port 58596 ssh2: RSA SHA256:uaFmuisalBvQMmH5qZstKuvE4kKzEBfPZoE38x/oDZ0 Nov 8 00:33:24.723020 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:24.727322 systemd-logind[1456]: New session 25 of user core. Nov 8 00:33:24.735084 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:33:24.852138 sshd[5663]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:24.856183 systemd[1]: sshd@24-10.0.0.145:22-10.0.0.1:58596.service: Deactivated successfully. Nov 8 00:33:24.858490 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:33:24.859176 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:33:24.860065 systemd-logind[1456]: Removed session 25. Nov 8 00:33:25.651514 kubelet[2505]: E1108 00:33:25.651142 2505 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68ff55f559-tjknv" podUID="576c7105-d7be-4c5c-87aa-116f53250b26"
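Underneath all of these pod failures is one fact: the v3.30.4 tag does not exist under ghcr.io/flatcar/calico, so containerd's resolver gets an HTTP 404 ("trying next host - response was http.StatusNotFound") on every pull. One way to confirm that independently of the kubelet is to query the registry over the OCI distribution API; a hedged sketch below, assuming the repository allows anonymous pulls and that ghcr.io's standard token endpoint and manifest URL shapes apply:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Anonymous pull token for the repository (assumes the repo is public).
	resp, err := http.Get("https://ghcr.io/token?scope=repository:flatcar/calico/apiserver:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest for the tag; a 404 here is exactly the "not found"
	// that containerd and kubelet keep reporting above.
	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/flatcar/calico/apiserver/manifests/v3.30.4", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println("manifest status:", res.Status) // expect: 404 Not Found
}
```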