Nov 1 00:25:06.050655 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025 Nov 1 00:25:06.050685 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:25:06.050701 kernel: BIOS-provided physical RAM map: Nov 1 00:25:06.050709 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 1 00:25:06.050718 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 1 00:25:06.050726 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 1 00:25:06.050737 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Nov 1 00:25:06.050747 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Nov 1 00:25:06.050755 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 1 00:25:06.050766 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 1 00:25:06.050775 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 1 00:25:06.050783 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 1 00:25:06.050797 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 1 00:25:06.050806 kernel: NX (Execute Disable) protection: active Nov 1 00:25:06.050817 kernel: APIC: Static calls initialized Nov 1 00:25:06.050834 kernel: SMBIOS 2.8 present. 
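
For reference, the two "usable" BIOS-e820 ranges above are all the RAM this VM gets; a minimal Python sketch that totals them from the quoted lines (range strings copied verbatim from the log, ranges are inclusive):

import re

e820 = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""

total = 0
for start, end in re.findall(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", e820):
    total += int(end, 16) - int(start, 16) + 1  # inclusive ranges

print(f"{total / 2**20:.1f} MiB usable")  # ~2511.5 MiB, matching the 2571752K total reported later in the log
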
Nov 1 00:25:06.050844 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Nov 1 00:25:06.050854 kernel: Hypervisor detected: KVM Nov 1 00:25:06.050864 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 1 00:25:06.050874 kernel: kvm-clock: using sched offset of 3843931905 cycles Nov 1 00:25:06.050884 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 1 00:25:06.050894 kernel: tsc: Detected 2794.748 MHz processor Nov 1 00:25:06.050904 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:25:06.050914 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:25:06.050925 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 1 00:25:06.050938 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 1 00:25:06.050948 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:25:06.050958 kernel: Using GB pages for direct mapping Nov 1 00:25:06.050968 kernel: ACPI: Early table checksum verification disabled Nov 1 00:25:06.050978 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Nov 1 00:25:06.050988 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:25:06.050998 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:25:06.051009 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:25:06.051021 kernel: ACPI: FACS 0x000000009CFE0000 000040 Nov 1 00:25:06.051031 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:25:06.051042 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:25:06.051052 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:25:06.051062 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:25:06.051072 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Nov 1 00:25:06.051082 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Nov 1 00:25:06.051097 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Nov 1 00:25:06.051110 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Nov 1 00:25:06.051120 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Nov 1 00:25:06.051131 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Nov 1 00:25:06.051141 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Nov 1 00:25:06.051152 kernel: No NUMA configuration found Nov 1 00:25:06.051162 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Nov 1 00:25:06.051173 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Nov 1 00:25:06.051186 kernel: Zone ranges: Nov 1 00:25:06.051197 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:25:06.051207 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Nov 1 00:25:06.051218 kernel: Normal empty Nov 1 00:25:06.051228 kernel: Movable zone start for each node Nov 1 00:25:06.051239 kernel: Early memory node ranges Nov 1 00:25:06.051249 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 1 00:25:06.051259 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Nov 1 00:25:06.051270 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Nov 1 00:25:06.051283 kernel: On 
node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:25:06.051297 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 1 00:25:06.051308 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 1 00:25:06.051318 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 1 00:25:06.051329 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:25:06.051354 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:25:06.051365 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:25:06.051376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 00:25:06.051386 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:25:06.051400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:25:06.051411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:25:06.051421 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:25:06.051431 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:25:06.051442 kernel: TSC deadline timer available Nov 1 00:25:06.051452 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 1 00:25:06.051462 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 1 00:25:06.051472 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 1 00:25:06.051486 kernel: kvm-guest: setup PV sched yield Nov 1 00:25:06.051500 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 1 00:25:06.051510 kernel: Booting paravirtualized kernel on KVM Nov 1 00:25:06.051521 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:25:06.051532 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 1 00:25:06.051542 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288 Nov 1 00:25:06.051552 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152 Nov 1 00:25:06.051562 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 1 00:25:06.051572 kernel: kvm-guest: PV spinlocks enabled Nov 1 00:25:06.051582 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 00:25:06.051610 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:25:06.051621 kernel: random: crng init done Nov 1 00:25:06.051632 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:25:06.051669 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:25:06.051679 kernel: Fallback order for Node 0: 0 Nov 1 00:25:06.051690 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Nov 1 00:25:06.051701 kernel: Policy zone: DMA32 Nov 1 00:25:06.051711 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:25:06.051726 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 136904K reserved, 0K cma-reserved) Nov 1 00:25:06.051737 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 1 00:25:06.051748 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 00:25:06.051758 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 00:25:06.051769 kernel: Dynamic Preempt: voluntary Nov 1 00:25:06.051780 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:25:06.051798 kernel: rcu: RCU event tracing is enabled. Nov 1 00:25:06.051809 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 1 00:25:06.051819 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:25:06.051834 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:25:06.051844 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:25:06.051855 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:25:06.051866 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 1 00:25:06.051880 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 1 00:25:06.051891 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 1 00:25:06.051901 kernel: Console: colour VGA+ 80x25 Nov 1 00:25:06.051912 kernel: printk: console [ttyS0] enabled Nov 1 00:25:06.051922 kernel: ACPI: Core revision 20230628 Nov 1 00:25:06.051933 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 1 00:25:06.051947 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:25:06.051957 kernel: x2apic enabled Nov 1 00:25:06.051968 kernel: APIC: Switched APIC routing to: physical x2apic Nov 1 00:25:06.051978 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 1 00:25:06.051989 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 1 00:25:06.052000 kernel: kvm-guest: setup PV IPIs Nov 1 00:25:06.052011 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 1 00:25:06.052034 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 1 00:25:06.052045 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Nov 1 00:25:06.052057 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 1 00:25:06.052068 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 1 00:25:06.052082 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 1 00:25:06.052093 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:25:06.052104 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:25:06.052115 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:25:06.052126 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 1 00:25:06.052140 kernel: active return thunk: retbleed_return_thunk Nov 1 00:25:06.052151 kernel: RETBleed: Mitigation: untrained return thunk Nov 1 00:25:06.052163 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:25:06.052173 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 00:25:06.052183 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 1 00:25:06.052194 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 1 00:25:06.052203 kernel: active return thunk: srso_return_thunk Nov 1 00:25:06.052213 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 1 00:25:06.052227 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:25:06.052238 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:25:06.052248 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:25:06.052258 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:25:06.052269 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 1 00:25:06.052279 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:25:06.052290 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:25:06.052300 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 00:25:06.052310 kernel: landlock: Up and running. Nov 1 00:25:06.052324 kernel: SELinux: Initializing. Nov 1 00:25:06.052334 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:25:06.052446 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:25:06.052456 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 1 00:25:06.052466 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 1 00:25:06.052477 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 1 00:25:06.052487 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 1 00:25:06.052497 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 1 00:25:06.052511 kernel: ... version: 0 Nov 1 00:25:06.052526 kernel: ... bit width: 48 Nov 1 00:25:06.052536 kernel: ... generic registers: 6 Nov 1 00:25:06.052547 kernel: ... value mask: 0000ffffffffffff Nov 1 00:25:06.052556 kernel: ... max period: 00007fffffffffff Nov 1 00:25:06.052566 kernel: ... fixed-purpose events: 0 Nov 1 00:25:06.052576 kernel: ... 
event mask: 000000000000003f Nov 1 00:25:06.052586 kernel: signal: max sigframe size: 1776 Nov 1 00:25:06.052608 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:25:06.052618 kernel: rcu: Max phase no-delay instances is 400. Nov 1 00:25:06.052632 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:25:06.052641 kernel: smpboot: x86: Booting SMP configuration: Nov 1 00:25:06.052651 kernel: .... node #0, CPUs: #1 #2 #3 Nov 1 00:25:06.052661 kernel: smp: Brought up 1 node, 4 CPUs Nov 1 00:25:06.052670 kernel: smpboot: Max logical packages: 1 Nov 1 00:25:06.052680 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Nov 1 00:25:06.052690 kernel: devtmpfs: initialized Nov 1 00:25:06.052699 kernel: x86/mm: Memory block size: 128MB Nov 1 00:25:06.052709 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:25:06.052719 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 1 00:25:06.052732 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:25:06.052742 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:25:06.052752 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:25:06.052762 kernel: audit: type=2000 audit(1761956704.227:1): state=initialized audit_enabled=0 res=1 Nov 1 00:25:06.052772 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:25:06.052781 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:25:06.052791 kernel: cpuidle: using governor menu Nov 1 00:25:06.052801 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:25:06.052811 kernel: dca service started, version 1.12.1 Nov 1 00:25:06.052825 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 1 00:25:06.052835 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 1 00:25:06.052845 kernel: PCI: Using configuration type 1 for base access Nov 1 00:25:06.052855 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
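
The BogoMIPS figures in the log follow directly from the detected TSC frequency: with the delay loop calibration skipped, each CPU's value is simply twice the detected MHz, and lpj/500 gives the same number if the usual 1000 Hz tick is assumed. A quick check against the logged values:

tsc_mhz = 2794.748   # "tsc: Detected 2794.748 MHz processor"
lpj     = 2794748    # "Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)"

per_cpu = lpj / 500          # lpj * HZ / 500000 with HZ = 1000 (assumed)
print(per_cpu, 2 * tsc_mhz)  # 5589.496 either way -> logged as "5589.49 BogoMIPS"
print(4 * per_cpu)           # 22357.984 -> "Total of 4 processors activated (22357.98 BogoMIPS)"
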
Nov 1 00:25:06.052865 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:25:06.052875 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 00:25:06.052885 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:25:06.052895 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 00:25:06.052909 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:25:06.052919 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:25:06.052928 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:25:06.052938 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:25:06.052948 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 1 00:25:06.052958 kernel: ACPI: Interpreter enabled Nov 1 00:25:06.052968 kernel: ACPI: PM: (supports S0 S3 S5) Nov 1 00:25:06.052996 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:25:06.053009 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:25:06.053028 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 00:25:06.053056 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 1 00:25:06.053086 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:25:06.053468 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:25:06.053676 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 1 00:25:06.053856 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 1 00:25:06.053873 kernel: PCI host bridge to bus 0000:00 Nov 1 00:25:06.054075 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:25:06.054258 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:25:06.054445 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:25:06.054625 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 1 00:25:06.054798 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 1 00:25:06.054957 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 1 00:25:06.055085 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:25:06.055286 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 1 00:25:06.055572 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 1 00:25:06.055772 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Nov 1 00:25:06.055950 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Nov 1 00:25:06.056108 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Nov 1 00:25:06.056277 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:25:06.056506 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 1 00:25:06.056706 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Nov 1 00:25:06.056890 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Nov 1 00:25:06.057161 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Nov 1 00:25:06.057474 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 1 00:25:06.057709 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Nov 1 00:25:06.057885 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Nov 1 00:25:06.058052 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Nov 1 
00:25:06.058237 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:25:06.058418 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Nov 1 00:25:06.058578 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Nov 1 00:25:06.058742 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Nov 1 00:25:06.058949 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Nov 1 00:25:06.059198 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 1 00:25:06.059371 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 1 00:25:06.059546 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 1 00:25:06.059710 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Nov 1 00:25:06.059856 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Nov 1 00:25:06.060064 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 1 00:25:06.060211 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 1 00:25:06.060222 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:25:06.060236 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:25:06.060244 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:25:06.060253 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:25:06.060264 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 1 00:25:06.060275 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 1 00:25:06.060284 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 1 00:25:06.060294 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 1 00:25:06.060304 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 1 00:25:06.060313 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 1 00:25:06.060327 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 1 00:25:06.060454 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 1 00:25:06.060465 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 1 00:25:06.060475 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 1 00:25:06.060485 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 1 00:25:06.060495 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 1 00:25:06.060504 kernel: iommu: Default domain type: Translated Nov 1 00:25:06.060514 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:25:06.060524 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:25:06.060538 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:25:06.060549 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 1 00:25:06.060559 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Nov 1 00:25:06.060730 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 1 00:25:06.060878 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 1 00:25:06.061035 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:25:06.061053 kernel: vgaarb: loaded Nov 1 00:25:06.061069 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 1 00:25:06.061080 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 1 00:25:06.061104 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:25:06.061121 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:25:06.061131 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:25:06.061142 kernel: pnp: PnP ACPI init Nov 
1 00:25:06.061384 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 1 00:25:06.061405 kernel: pnp: PnP ACPI: found 6 devices Nov 1 00:25:06.061416 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:25:06.061427 kernel: NET: Registered PF_INET protocol family Nov 1 00:25:06.061445 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:25:06.061457 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 1 00:25:06.061467 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:25:06.061478 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:25:06.061488 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 1 00:25:06.061499 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 1 00:25:06.061510 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:25:06.061521 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:25:06.061532 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:25:06.061548 kernel: NET: Registered PF_XDP protocol family Nov 1 00:25:06.061734 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:25:06.061892 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:25:06.062046 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:25:06.062207 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 1 00:25:06.062396 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 1 00:25:06.062579 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 1 00:25:06.062608 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:25:06.062632 kernel: Initialise system trusted keyrings Nov 1 00:25:06.062644 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 1 00:25:06.062656 kernel: Key type asymmetric registered Nov 1 00:25:06.062673 kernel: Asymmetric key parser 'x509' registered Nov 1 00:25:06.062689 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 00:25:06.062698 kernel: io scheduler mq-deadline registered Nov 1 00:25:06.062709 kernel: io scheduler kyber registered Nov 1 00:25:06.062726 kernel: io scheduler bfq registered Nov 1 00:25:06.062743 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:25:06.062761 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 1 00:25:06.062778 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 00:25:06.062791 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 1 00:25:06.062800 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:25:06.062815 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:25:06.062835 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:25:06.062856 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:25:06.062870 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:25:06.063098 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 1 00:25:06.063127 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 00:25:06.063357 kernel: rtc_cmos 00:04: registered as rtc0 Nov 1 00:25:06.063572 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:25:05 UTC (1761956705) Nov 1 00:25:06.063764 kernel: rtc_cmos 
00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 1 00:25:06.063782 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 1 00:25:06.063793 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:25:06.063803 kernel: Segment Routing with IPv6 Nov 1 00:25:06.063813 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:25:06.063830 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:25:06.063840 kernel: Key type dns_resolver registered Nov 1 00:25:06.063850 kernel: IPI shorthand broadcast: enabled Nov 1 00:25:06.063861 kernel: sched_clock: Marking stable (1066006974, 223883444)->(1437809937, -147919519) Nov 1 00:25:06.063871 kernel: registered taskstats version 1 Nov 1 00:25:06.063882 kernel: Loading compiled-in X.509 certificates Nov 1 00:25:06.063892 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:25:06.063902 kernel: Key type .fscrypt registered Nov 1 00:25:06.063912 kernel: Key type fscrypt-provisioning registered Nov 1 00:25:06.063927 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 1 00:25:06.063938 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:25:06.063948 kernel: ima: No architecture policies found Nov 1 00:25:06.063959 kernel: clk: Disabling unused clocks Nov 1 00:25:06.063969 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:25:06.063978 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:25:06.063986 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:25:06.063994 kernel: Run /init as init process Nov 1 00:25:06.064001 kernel: with arguments: Nov 1 00:25:06.064012 kernel: /init Nov 1 00:25:06.064020 kernel: with environment: Nov 1 00:25:06.064028 kernel: HOME=/ Nov 1 00:25:06.064035 kernel: TERM=linux Nov 1 00:25:06.064045 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:25:06.064056 systemd[1]: Detected virtualization kvm. Nov 1 00:25:06.064064 systemd[1]: Detected architecture x86-64. Nov 1 00:25:06.064072 systemd[1]: Running in initrd. Nov 1 00:25:06.064083 systemd[1]: No hostname configured, using default hostname. Nov 1 00:25:06.064091 systemd[1]: Hostname set to . Nov 1 00:25:06.064100 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:25:06.064108 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:25:06.064117 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:25:06.064125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:25:06.064134 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:25:06.064143 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:25:06.064154 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:25:06.064176 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
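
The rtc_cmos line above reports the same instant in both forms, which is easy to confirm (it also sits one second after the audit timestamp 1761956704.227 earlier in the log); a one-off check:

from datetime import datetime, timezone

# "rtc_cmos 00:04: setting system clock to 2025-11-01T00:25:05 UTC (1761956705)"
print(datetime.fromtimestamp(1761956705, tz=timezone.utc).isoformat())
# -> 2025-11-01T00:25:05+00:00
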
Nov 1 00:25:06.064189 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:25:06.064198 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:25:06.064209 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:25:06.064217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:25:06.064226 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:25:06.064234 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:25:06.064243 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:25:06.064251 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:25:06.064259 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:25:06.064268 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:25:06.064276 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:25:06.064288 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:25:06.064297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:25:06.064305 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:25:06.064314 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:25:06.064325 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:25:06.064459 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:25:06.064475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:25:06.064487 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:25:06.064504 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:25:06.064519 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:25:06.064531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:25:06.064542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:25:06.064554 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:25:06.064566 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:25:06.064622 systemd-journald[193]: Collecting audit messages is disabled. Nov 1 00:25:06.064656 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:25:06.064676 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:25:06.064688 systemd-journald[193]: Journal started Nov 1 00:25:06.064713 systemd-journald[193]: Runtime Journal (/run/log/journal/4b8c94521b134bb485c10305bdd2b714) is 6.0M, max 48.4M, 42.3M free. Nov 1 00:25:06.057577 systemd-modules-load[194]: Inserted module 'overlay' Nov 1 00:25:06.150481 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:25:06.150519 kernel: Bridge firewalling registered Nov 1 00:25:06.088901 systemd-modules-load[194]: Inserted module 'br_netfilter' Nov 1 00:25:06.158490 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:25:06.159154 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:25:06.167804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 1 00:25:06.172005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:25:06.203701 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:25:06.209262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:25:06.222601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:25:06.227930 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:25:06.234000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:25:06.240399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:25:06.243426 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:25:06.258746 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:25:06.265834 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:25:06.284363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:25:06.295862 dracut-cmdline[225]: dracut-dracut-053 Nov 1 00:25:06.302855 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:25:06.551030 systemd-resolved[233]: Positive Trust Anchors: Nov 1 00:25:06.551059 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:25:06.551091 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:25:06.554208 systemd-resolved[233]: Defaulting to hostname 'linux'. Nov 1 00:25:06.555948 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:25:06.576692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:25:06.602757 kernel: SCSI subsystem initialized Nov 1 00:25:06.614376 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:25:06.626392 kernel: iscsi: registered transport (tcp) Nov 1 00:25:06.648392 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:25:06.648480 kernel: QLogic iSCSI HBA Driver Nov 1 00:25:06.715708 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 00:25:06.730658 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:25:06.775386 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 1 00:25:06.775460 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:25:06.777364 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 00:25:06.823399 kernel: raid6: avx2x4 gen() 25580 MB/s Nov 1 00:25:06.840387 kernel: raid6: avx2x2 gen() 30325 MB/s Nov 1 00:25:06.858231 kernel: raid6: avx2x1 gen() 25171 MB/s Nov 1 00:25:06.858304 kernel: raid6: using algorithm avx2x2 gen() 30325 MB/s Nov 1 00:25:06.889066 kernel: raid6: .... xor() 19366 MB/s, rmw enabled Nov 1 00:25:06.889156 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:25:06.911380 kernel: xor: automatically using best checksumming function avx Nov 1 00:25:07.073372 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:25:07.087150 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:25:07.105608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:25:07.118621 systemd-udevd[414]: Using default interface naming scheme 'v255'. Nov 1 00:25:07.123709 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:25:07.133532 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:25:07.150259 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Nov 1 00:25:07.185883 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:25:07.219663 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:25:07.296099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:25:07.307628 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 00:25:07.326009 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:25:07.332231 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:25:07.337333 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:25:07.338096 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:25:07.348766 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:25:07.362414 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:25:07.364724 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:25:07.370949 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 1 00:25:07.380635 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 1 00:25:07.391203 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:25:07.391271 kernel: AES CTR mode by8 optimization enabled Nov 1 00:25:07.397825 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:25:07.397875 kernel: GPT:9289727 != 19775487 Nov 1 00:25:07.397889 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:25:07.397912 kernel: GPT:9289727 != 19775487 Nov 1 00:25:07.397924 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:25:07.397937 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:25:07.397515 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:25:07.397680 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:25:07.405124 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:25:07.409204 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
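
The GPT complaints above are the usual sign of a disk image written onto a larger virtual disk: the backup GPT header still sits where the end of the original image was, not at the end of the 10 GB vda, and the later disk-uuid step rewrites the headers in place. The logged numbers line up as follows:

sector = 512
disk_sectors = 19775488            # virtio_blk: "[vda] 19775488 512-byte logical blocks"
backup_hdr_lba = 9289727           # where the image's backup GPT header actually is

print(disk_sectors - 1)            # 19775487, where the backup header belongs: the other half of "GPT:9289727 != 19775487"
print(f"{(backup_hdr_lba + 1) * sector / 2**30:.2f} GiB")  # ~4.43 GiB, the disk size the image was built for
print(f"{disk_sectors * sector / 2**30:.2f} GiB")          # 9.43 GiB, the actual virtual disk, as logged
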
Nov 1 00:25:07.409455 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:25:07.416164 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:25:07.427267 kernel: libata version 3.00 loaded. Nov 1 00:25:07.428719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:25:07.438496 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 00:25:07.438752 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 00:25:07.444288 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 00:25:07.444548 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 00:25:07.458509 kernel: scsi host0: ahci Nov 1 00:25:07.477378 kernel: scsi host1: ahci Nov 1 00:25:07.477666 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (471) Nov 1 00:25:07.561449 kernel: scsi host2: ahci Nov 1 00:25:07.562378 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468) Nov 1 00:25:07.564366 kernel: scsi host3: ahci Nov 1 00:25:07.567375 kernel: scsi host4: ahci Nov 1 00:25:07.567503 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 1 00:25:07.650096 kernel: scsi host5: ahci Nov 1 00:25:07.650363 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Nov 1 00:25:07.650377 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Nov 1 00:25:07.650388 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Nov 1 00:25:07.650412 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Nov 1 00:25:07.650422 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Nov 1 00:25:07.652251 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Nov 1 00:25:07.644833 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 1 00:25:07.650368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:25:07.657768 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 1 00:25:07.669196 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 1 00:25:07.677700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:25:07.693688 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:25:07.697176 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:25:07.704214 disk-uuid[565]: Primary Header is updated. Nov 1 00:25:07.704214 disk-uuid[565]: Secondary Entries is updated. Nov 1 00:25:07.704214 disk-uuid[565]: Secondary Header is updated. Nov 1 00:25:07.710069 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:25:07.718383 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:25:07.725888 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 1 00:25:07.879359 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 1 00:25:07.879463 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 1 00:25:07.879494 kernel: ata3.00: applying bridge limits Nov 1 00:25:07.879505 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 00:25:07.881367 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:25:07.881395 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 00:25:07.882369 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:25:07.885375 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:25:07.885399 kernel: ata3.00: configured for UDMA/100 Nov 1 00:25:07.887389 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 1 00:25:07.932956 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 1 00:25:07.933409 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:25:07.945378 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 1 00:25:08.718377 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:25:08.718804 disk-uuid[568]: The operation has completed successfully. Nov 1 00:25:08.756001 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:25:08.756180 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:25:08.804792 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 00:25:08.809562 sh[591]: Success Nov 1 00:25:08.825377 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 1 00:25:08.862127 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:25:08.887206 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:25:08.890369 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:25:08.948737 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:25:08.948827 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:25:08.948839 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:25:08.950447 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:25:08.951695 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:25:08.957986 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 00:25:08.960765 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:25:08.967519 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:25:08.970103 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:25:08.981382 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:25:08.981459 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:25:08.981475 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:25:08.985367 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 00:25:08.997413 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:25:09.000697 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:25:09.010295 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Nov 1 00:25:09.016532 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:25:09.218759 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:25:09.233727 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:25:09.253999 ignition[679]: Ignition 2.19.0 Nov 1 00:25:09.254018 ignition[679]: Stage: fetch-offline Nov 1 00:25:09.254095 ignition[679]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:25:09.254112 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:25:09.254305 ignition[679]: parsed url from cmdline: "" Nov 1 00:25:09.254312 ignition[679]: no config URL provided Nov 1 00:25:09.254319 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:25:09.254353 ignition[679]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:25:09.265936 systemd-networkd[777]: lo: Link UP Nov 1 00:25:09.254403 ignition[679]: op(1): [started] loading QEMU firmware config module Nov 1 00:25:09.265941 systemd-networkd[777]: lo: Gained carrier Nov 1 00:25:09.254413 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 1 00:25:09.271804 ignition[679]: op(1): [finished] loading QEMU firmware config module Nov 1 00:25:09.276864 systemd-networkd[777]: Enumeration completed Nov 1 00:25:09.277097 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:25:09.279672 systemd[1]: Reached target network.target - Network. Nov 1 00:25:09.287116 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:25:09.287129 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:25:09.295576 systemd-networkd[777]: eth0: Link UP Nov 1 00:25:09.296246 systemd-networkd[777]: eth0: Gained carrier Nov 1 00:25:09.296267 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:25:09.314496 systemd-networkd[777]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:25:09.376970 ignition[679]: parsing config with SHA512: 930fe74ece92d7a321e9cb061ea5c5111dc50b5bedc679f0e516b44380dac5e3d8c69ac473d369d9cf72d082ae4bda91c3f3d138f6a51daf909ec6724e528a33 Nov 1 00:25:09.383957 unknown[679]: fetched base config from "system" Nov 1 00:25:09.383972 unknown[679]: fetched user config from "qemu" Nov 1 00:25:09.384700 ignition[679]: fetch-offline: fetch-offline passed Nov 1 00:25:09.388424 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:25:09.384850 ignition[679]: Ignition finished successfully Nov 1 00:25:09.390294 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 00:25:09.403648 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:25:09.457683 ignition[783]: Ignition 2.19.0 Nov 1 00:25:09.457701 ignition[783]: Stage: kargs Nov 1 00:25:09.457944 ignition[783]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:25:09.457958 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:25:09.498691 ignition[783]: kargs: kargs passed Nov 1 00:25:09.499755 ignition[783]: Ignition finished successfully Nov 1 00:25:09.504317 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
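
Ignition logs a SHA-512 of the config it ends up parsing (here the QEMU fw_cfg user config merged with the base config). If you keep a copy of the config you supplied, you can hash it the same way and compare it against the logged digest; a minimal sketch, where the path is hypothetical and a match is only expected if the bytes are exactly what Ignition parsed:

import hashlib, sys

logged = "930fe74ece92d7a321e9cb061ea5c5111dc50b5bedc679f0e516b44380dac5e3d8c69ac473d369d9cf72d082ae4bda91c3f3d138f6a51daf909ec6724e528a33"
path = sys.argv[1] if len(sys.argv) > 1 else "config.ign"   # hypothetical local copy of the supplied config

with open(path, "rb") as f:
    digest = hashlib.sha512(f.read()).hexdigest()

print("matches logged digest" if digest == logged else "differs from logged digest")
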
Nov 1 00:25:09.519665 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:25:09.645094 ignition[792]: Ignition 2.19.0 Nov 1 00:25:09.645107 ignition[792]: Stage: disks Nov 1 00:25:09.645320 ignition[792]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:25:09.645351 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:25:09.646635 ignition[792]: disks: disks passed Nov 1 00:25:09.646701 ignition[792]: Ignition finished successfully Nov 1 00:25:09.655432 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:25:09.658923 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:25:09.662629 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:25:09.666564 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:25:09.667244 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:25:09.670319 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:25:09.685560 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 00:25:09.716041 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 00:25:10.195729 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:25:10.234429 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 00:25:10.397398 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 00:25:10.398608 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:25:10.400695 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:25:10.416539 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:25:10.453693 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 00:25:10.487278 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Nov 1 00:25:10.456715 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 1 00:25:10.498951 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:25:10.498976 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:25:10.498989 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:25:10.499001 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 00:25:10.456765 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:25:10.456792 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:25:10.489194 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:25:10.500813 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:25:10.507541 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 1 00:25:10.550716 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:25:10.576830 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:25:10.582612 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:25:10.589243 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:25:10.694819 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:25:10.703523 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 00:25:10.707361 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:25:10.718366 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 00:25:10.721395 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:25:10.738853 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:25:10.920915 ignition[928]: INFO : Ignition 2.19.0 Nov 1 00:25:10.920915 ignition[928]: INFO : Stage: mount Nov 1 00:25:10.939699 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:25:10.939699 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:25:10.944303 ignition[928]: INFO : mount: mount passed Nov 1 00:25:10.944303 ignition[928]: INFO : Ignition finished successfully Nov 1 00:25:10.947520 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:25:10.959466 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:25:11.268648 systemd-networkd[777]: eth0: Gained IPv6LL Nov 1 00:25:11.412696 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:25:11.422380 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937) Nov 1 00:25:11.422455 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:25:11.425613 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:25:11.425646 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:25:11.445365 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 00:25:11.447040 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 00:25:11.507038 ignition[954]: INFO : Ignition 2.19.0 Nov 1 00:25:11.507038 ignition[954]: INFO : Stage: files Nov 1 00:25:11.534920 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:25:11.534920 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:25:11.539100 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:25:11.542404 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:25:11.542404 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:25:11.550576 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:25:11.553165 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:25:11.553165 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:25:11.553165 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:25:11.553165 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:25:11.553165 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:25:11.553165 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:25:11.551415 unknown[954]: wrote ssh authorized keys file for user: core Nov 1 00:25:11.602573 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:25:11.729309 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:25:11.729309 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:25:11.788063 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:25:11.788063 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:25:11.788063 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:25:11.788063 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:25:11.805922 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:25:11.805922 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:25:11.805922 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:25:11.805922 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:25:11.805922 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:25:11.805922 ignition[954]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:25:11.805922 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:25:11.805922 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:25:11.805922 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:25:12.101924 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 00:25:13.017602 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:25:13.017602 ignition[954]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Nov 1 00:25:13.025101 ignition[954]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 00:25:13.091703 ignition[954]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:25:13.098534 ignition[954]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:25:13.101546 ignition[954]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 00:25:13.101546 ignition[954]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:25:13.101546 ignition[954]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" 
Nov 1 00:25:13.101546 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:25:13.101546 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:25:13.101546 ignition[954]: INFO : files: files passed Nov 1 00:25:13.101546 ignition[954]: INFO : Ignition finished successfully Nov 1 00:25:13.122637 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:25:13.136706 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:25:13.143314 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:25:13.149133 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:25:13.151319 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:25:13.177254 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Nov 1 00:25:13.183803 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:25:13.183803 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:25:13.190589 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:25:13.197837 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:25:13.204034 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:25:13.218774 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:25:13.251786 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:25:13.253812 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:25:13.258988 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:25:13.262894 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:25:13.266769 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:25:13.282652 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:25:13.298569 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:25:13.318802 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:25:13.332263 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:25:13.336872 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:25:13.341471 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:25:13.345168 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:25:13.347166 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:25:13.352363 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:25:13.356183 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:25:13.359463 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:25:13.363476 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:25:13.367827 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
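The Ignition "files" stage above records every operation as a numbered op(N) with matching [started]/[finished] markers. A minimal, purely illustrative Python sketch for pairing those markers when auditing such a journal capture offline (the file name boot.log and the regular expression are assumptions, not part of Ignition or systemd tooling):

    import re

    # Matches Ignition operation markers like:
    #   ignition[954]: INFO : files: ... op(4): [started] writing file "..."
    # Nested operations (op(c): op(d): ...) are reported under the innermost op id.
    OP_RE = re.compile(
        r'ignition\[\d+\]: \w+ : .*?(op\(\w+\)): \[(started|finished)\] (.+?)(?= Nov \d|\Z)',
        re.S,
    )

    def audit_ignition_ops(journal_text: str) -> None:
        ops = {}                                  # op id -> {phase: description}
        for op_id, phase, desc in OP_RE.findall(journal_text):
            ops.setdefault(op_id, {})[phase] = desc.strip()
        for op_id, phases in ops.items():
            status = "ok" if {"started", "finished"} <= phases.keys() else "INCOMPLETE"
            desc = phases.get("started") or phases.get("finished") or ""
            print(f"{op_id}: {status}  {desc}")

    if __name__ == "__main__":
        with open("boot.log") as f:               # hypothetical file holding the journal text above
            audit_ignition_ops(f.read())

Run against the log above, each op from op(1) through op(15) should report "ok", matching the "files passed" summary.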
Nov 1 00:25:13.372015 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:25:13.376322 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:25:13.381655 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:25:13.386158 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:25:13.390591 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:25:13.394022 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:25:13.395979 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:25:13.400468 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:25:13.404837 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:25:13.409648 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:25:13.411483 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:25:13.416743 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:25:13.418767 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:25:13.423259 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:25:13.425395 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:25:13.430071 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:25:13.433644 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:25:13.435953 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:25:13.441404 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:25:13.445116 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:25:13.448899 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:25:13.450632 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:25:13.454540 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:25:13.456263 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:25:13.460506 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:25:13.462803 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:25:13.467965 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:25:13.469922 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:25:13.485634 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:25:13.490626 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:25:13.494189 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:25:13.496318 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:25:13.500919 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:25:13.501112 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 1 00:25:13.509325 ignition[1008]: INFO : Ignition 2.19.0 Nov 1 00:25:13.509325 ignition[1008]: INFO : Stage: umount Nov 1 00:25:13.512509 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:25:13.512509 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:25:13.512509 ignition[1008]: INFO : umount: umount passed Nov 1 00:25:13.512509 ignition[1008]: INFO : Ignition finished successfully Nov 1 00:25:13.515036 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:25:13.515201 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:25:13.517571 systemd[1]: Stopped target network.target - Network. Nov 1 00:25:13.520918 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:25:13.521150 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:25:13.527051 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:25:13.527190 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:25:13.528270 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:25:13.528333 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:25:13.534615 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:25:13.534727 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:25:13.536049 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:25:13.543613 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:25:13.546219 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:25:13.547159 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:25:13.547313 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:25:13.551760 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:25:13.551834 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:25:13.557269 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:25:13.557462 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:25:13.563967 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:25:13.564055 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:25:13.570651 systemd-networkd[777]: eth0: DHCPv6 lease lost Nov 1 00:25:13.571933 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:25:13.572107 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:25:13.576854 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:25:13.577068 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:25:13.579123 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:25:13.579242 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:25:13.595663 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:25:13.596843 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:25:13.596963 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:25:13.601116 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:25:13.601218 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Nov 1 00:25:13.604842 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:25:13.604898 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:25:13.608915 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:25:13.641035 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:25:13.643202 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:25:13.648924 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:25:13.651056 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:25:13.656125 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:25:13.656317 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:25:13.662538 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:25:13.662623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:25:13.668439 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:25:13.670298 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:25:13.674912 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:25:13.675040 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:25:13.680524 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:25:13.680609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:25:13.699667 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:25:13.700826 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:25:13.700916 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:25:13.701926 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 00:25:13.701999 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:25:13.709081 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:25:13.709160 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:25:13.710105 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:25:13.710191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:25:13.719470 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:25:13.719609 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:25:13.725044 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:25:13.738413 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:25:13.769999 systemd[1]: Switching root. Nov 1 00:25:13.810781 systemd-journald[193]: Journal stopped Nov 1 00:25:15.558330 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Nov 1 00:25:15.558547 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:25:15.558576 kernel: SELinux: policy capability open_perms=1 Nov 1 00:25:15.558592 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:25:15.558608 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:25:15.558623 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:25:15.558638 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:25:15.558661 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:25:15.558693 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:25:15.558710 kernel: audit: type=1403 audit(1761956714.573:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:25:15.558728 systemd[1]: Successfully loaded SELinux policy in 47.845ms. Nov 1 00:25:15.558757 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.402ms. Nov 1 00:25:15.558776 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:25:15.558793 systemd[1]: Detected virtualization kvm. Nov 1 00:25:15.558809 systemd[1]: Detected architecture x86-64. Nov 1 00:25:15.558839 systemd[1]: Detected first boot. Nov 1 00:25:15.558857 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:25:15.558888 zram_generator::config[1089]: No configuration found. Nov 1 00:25:15.558907 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:25:15.558924 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:25:15.558941 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 1 00:25:15.558959 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:25:15.558978 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:25:15.558995 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:25:15.559011 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:25:15.559036 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:25:15.559061 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:25:15.559077 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:25:15.559094 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:25:15.559110 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:25:15.559127 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:25:15.559144 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:25:15.559161 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:25:15.559178 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:25:15.559204 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:25:15.559221 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Nov 1 00:25:15.559247 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:25:15.559264 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:25:15.559280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:25:15.559296 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:25:15.559314 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:25:15.559331 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:25:15.559618 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:25:15.559664 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:25:15.559687 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:25:15.559704 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:25:15.559721 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:25:15.559738 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:25:15.559755 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:25:15.559771 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:25:15.559789 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:25:15.559817 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:25:15.559834 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:25:15.559852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:15.559868 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:25:15.559885 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:25:15.559901 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:25:15.559918 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:25:15.559935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:25:15.559960 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:25:15.559978 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:25:15.559994 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:25:15.560010 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:25:15.560026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:25:15.560042 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:25:15.560058 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:25:15.560074 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:25:15.560103 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 1 00:25:15.560128 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Nov 1 00:25:15.560144 systemd[1]: Starting systemd-journald.service - Journal Service... 
Nov 1 00:25:15.560159 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:25:15.560174 kernel: fuse: init (API version 7.39) Nov 1 00:25:15.560189 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:25:15.560205 kernel: loop: module loaded Nov 1 00:25:15.560249 systemd-journald[1167]: Collecting audit messages is disabled. Nov 1 00:25:15.560297 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:25:15.560314 systemd-journald[1167]: Journal started Nov 1 00:25:15.560371 systemd-journald[1167]: Runtime Journal (/run/log/journal/4b8c94521b134bb485c10305bdd2b714) is 6.0M, max 48.4M, 42.3M free. Nov 1 00:25:15.572394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:25:15.572466 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:15.595282 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:25:15.596982 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:25:15.599027 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:25:15.601144 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:25:15.603000 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:25:15.605065 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:25:15.607182 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:25:15.609281 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:25:15.611830 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:25:15.612064 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:25:15.614515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:25:15.614799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:25:15.617249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:25:15.617507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:25:15.619381 kernel: ACPI: bus type drm_connector registered Nov 1 00:25:15.621246 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:25:15.621571 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 00:25:15.624368 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:25:15.624652 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:25:15.627032 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:25:15.627405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:25:15.629983 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:25:15.632823 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:25:15.635698 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:25:15.716450 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:25:15.745581 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:25:15.749616 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Nov 1 00:25:15.751578 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:25:15.754164 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:25:15.757690 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:25:15.759767 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:25:15.762083 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:25:15.764368 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:25:15.766268 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:25:15.771936 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:25:15.778379 systemd-journald[1167]: Time spent on flushing to /var/log/journal/4b8c94521b134bb485c10305bdd2b714 is 19.637ms for 938 entries. Nov 1 00:25:15.778379 systemd-journald[1167]: System Journal (/var/log/journal/4b8c94521b134bb485c10305bdd2b714) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:25:16.058224 systemd-journald[1167]: Received client request to flush runtime journal. Nov 1 00:25:15.778438 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:25:15.808131 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:25:15.811919 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:25:15.822600 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 00:25:15.895626 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 00:25:15.954002 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Nov 1 00:25:15.954023 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:25:15.954025 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Nov 1 00:25:15.975623 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:25:16.027805 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:25:16.030121 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:25:16.047851 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:25:16.060562 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:25:16.063097 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:25:16.104854 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:25:16.114592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:25:16.155274 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Nov 1 00:25:16.155298 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Nov 1 00:25:16.163362 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:25:16.579703 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Nov 1 00:25:16.593538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:25:16.627231 systemd-udevd[1254]: Using default interface naming scheme 'v255'. Nov 1 00:25:16.646967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:25:16.711706 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 1 00:25:16.726748 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1270) Nov 1 00:25:16.724649 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:25:16.757395 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:25:16.762377 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:25:16.784688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:25:16.799626 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:25:16.799958 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 00:25:16.796573 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:25:16.804517 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:25:16.805363 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:25:16.831134 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:25:16.829772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:25:16.865408 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:25:16.985383 kernel: kvm_amd: TSC scaling supported Nov 1 00:25:16.985555 kernel: kvm_amd: Nested Virtualization enabled Nov 1 00:25:16.985577 kernel: kvm_amd: Nested Paging enabled Nov 1 00:25:16.985599 kernel: kvm_amd: LBR virtualization supported Nov 1 00:25:16.985661 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 1 00:25:16.985678 kernel: kvm_amd: Virtual GIF supported Nov 1 00:25:17.014299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:25:17.027457 systemd-networkd[1274]: lo: Link UP Nov 1 00:25:17.027470 systemd-networkd[1274]: lo: Gained carrier Nov 1 00:25:17.032852 systemd-networkd[1274]: Enumeration completed Nov 1 00:25:17.033031 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:25:17.034185 systemd-networkd[1274]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:25:17.034191 systemd-networkd[1274]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:25:17.035812 systemd-networkd[1274]: eth0: Link UP Nov 1 00:25:17.035826 systemd-networkd[1274]: eth0: Gained carrier Nov 1 00:25:17.035844 systemd-networkd[1274]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:25:17.037373 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:25:17.049628 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:25:17.051435 systemd-networkd[1274]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:25:17.079164 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:25:17.099825 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
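The systemd-networkd lease line above reports the address in CIDR form (10.0.0.124/16, gateway 10.0.0.1). A small sketch using Python's standard ipaddress module, with the values copied from that log entry, to unpack what the /16 lease implies; it is illustration only and plays no part in the boot flow:

    import ipaddress

    # Values taken from the DHCPv4 lease line logged by systemd-networkd above.
    lease = ipaddress.ip_interface("10.0.0.124/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print("address:        ", lease.ip)                    # 10.0.0.124
    print("network:        ", lease.network)                # 10.0.0.0/16
    print("netmask:        ", lease.network.netmask)        # 255.255.0.0
    print("usable hosts:   ", lease.network.num_addresses - 2)
    print("gateway on-link:", gateway in lease.network)     # True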
Nov 1 00:25:17.118978 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:25:17.159530 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:25:17.162247 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:25:17.181750 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:25:17.191059 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:25:17.233374 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:25:17.235753 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:25:17.237841 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:25:17.237870 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:25:17.240035 systemd[1]: Reached target machines.target - Containers. Nov 1 00:25:17.243618 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 00:25:17.257671 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:25:17.262283 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:25:17.264219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:25:17.265491 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:25:17.271033 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 00:25:17.275594 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:25:17.279933 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:25:17.296628 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:25:17.303918 kernel: loop0: detected capacity change from 0 to 140768 Nov 1 00:25:17.317648 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:25:17.318811 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 00:25:17.330372 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:25:17.365383 kernel: loop1: detected capacity change from 0 to 142488 Nov 1 00:25:17.570363 kernel: loop2: detected capacity change from 0 to 224512 Nov 1 00:25:17.647369 kernel: loop3: detected capacity change from 0 to 140768 Nov 1 00:25:17.664367 kernel: loop4: detected capacity change from 0 to 142488 Nov 1 00:25:17.677389 kernel: loop5: detected capacity change from 0 to 224512 Nov 1 00:25:17.684619 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 1 00:25:17.685464 (sd-merge)[1325]: Merged extensions into '/usr'. Nov 1 00:25:17.691780 systemd[1]: Reloading requested from client PID 1312 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:25:17.691797 systemd[1]: Reloading... Nov 1 00:25:17.788404 zram_generator::config[1356]: No configuration found. 
Nov 1 00:25:17.984411 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:25:18.028866 ldconfig[1308]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:25:18.069937 systemd[1]: Reloading finished in 377 ms. Nov 1 00:25:18.097845 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:25:18.101229 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:25:18.117623 systemd-networkd[1274]: eth0: Gained IPv6LL Nov 1 00:25:18.136617 systemd[1]: Starting ensure-sysext.service... Nov 1 00:25:18.151773 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:25:18.154828 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:25:18.167009 systemd[1]: Reloading requested from client PID 1397 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:25:18.167030 systemd[1]: Reloading... Nov 1 00:25:18.186232 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:25:18.186749 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:25:18.188187 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:25:18.188784 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. Nov 1 00:25:18.188926 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. Nov 1 00:25:18.194425 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:25:18.194449 systemd-tmpfiles[1399]: Skipping /boot Nov 1 00:25:18.215872 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:25:18.215897 systemd-tmpfiles[1399]: Skipping /boot Nov 1 00:25:18.243429 zram_generator::config[1428]: No configuration found. Nov 1 00:25:18.388901 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:25:18.467802 systemd[1]: Reloading finished in 300 ms. Nov 1 00:25:18.487277 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:25:18.506315 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:25:18.510280 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:25:18.515651 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:25:18.522498 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:25:18.527511 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:25:18.538056 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:18.538241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:25:18.547417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Nov 1 00:25:18.552653 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:25:18.558839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:25:18.562687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:25:18.562802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:18.563847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:25:18.564092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:25:18.574682 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:25:18.580062 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:25:18.580625 augenrules[1498]: No rules Nov 1 00:25:18.580305 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:25:18.584038 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:25:18.586908 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:25:18.587531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:25:18.599839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:18.600506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:25:18.614157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:25:18.618423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:25:18.625130 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:25:18.627251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:25:18.632651 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:25:18.634766 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:18.636577 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:25:18.641326 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:25:18.644123 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:25:18.644623 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:25:18.647216 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:25:18.647503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:25:18.650330 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:25:18.652683 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:25:18.655326 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:25:18.668080 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:18.668495 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 1 00:25:18.675669 systemd-resolved[1478]: Positive Trust Anchors: Nov 1 00:25:18.675691 systemd-resolved[1478]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:25:18.675733 systemd-resolved[1478]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:25:18.678742 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:25:18.681641 systemd-resolved[1478]: Defaulting to hostname 'linux'. Nov 1 00:25:18.682483 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:25:18.686020 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:25:18.692721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:25:18.694102 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:25:18.694513 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:25:18.694854 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:25:18.697225 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:25:18.700790 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:25:18.701080 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:25:18.703989 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:25:18.710626 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:25:18.713193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:25:18.713502 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:25:18.718198 systemd[1]: Finished ensure-sysext.service. Nov 1 00:25:18.721117 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:25:18.721520 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:25:18.728592 systemd[1]: Reached target network.target - Network. Nov 1 00:25:18.730425 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:25:18.732322 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:25:18.734658 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:25:18.734750 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:25:18.747502 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 00:25:18.858388 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Nov 1 00:25:18.859754 systemd-timesyncd[1545]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:25:18.859813 systemd-timesyncd[1545]: Initial clock synchronization to Sat 2025-11-01 00:25:18.729212 UTC. Nov 1 00:25:18.861628 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:25:18.863682 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:25:18.865833 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:25:18.867967 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:25:18.870129 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:25:18.870166 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:25:18.872022 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:25:18.874303 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:25:18.876501 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:25:18.878642 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:25:18.880986 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:25:18.885513 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:25:18.888647 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:25:18.893707 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:25:18.895597 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:25:18.897311 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:25:18.899620 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:25:18.899671 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:25:18.899697 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:25:18.901507 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:25:18.904596 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 00:25:18.907560 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:25:18.912462 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:25:18.918113 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:25:18.919910 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:25:18.921817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:25:18.922833 jq[1552]: false Nov 1 00:25:18.927493 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:25:18.931579 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:25:18.942786 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:25:18.948505 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Nov 1 00:25:18.947731 dbus-daemon[1551]: [system] SELinux support is enabled Nov 1 00:25:18.951680 extend-filesystems[1555]: Found loop3 Nov 1 00:25:18.953082 extend-filesystems[1555]: Found loop4 Nov 1 00:25:18.953082 extend-filesystems[1555]: Found loop5 Nov 1 00:25:18.953082 extend-filesystems[1555]: Found sr0 Nov 1 00:25:18.959132 extend-filesystems[1555]: Found vda Nov 1 00:25:18.959132 extend-filesystems[1555]: Found vda1 Nov 1 00:25:18.959132 extend-filesystems[1555]: Found vda2 Nov 1 00:25:18.959132 extend-filesystems[1555]: Found vda3 Nov 1 00:25:18.959132 extend-filesystems[1555]: Found usr Nov 1 00:25:18.959132 extend-filesystems[1555]: Found vda4 Nov 1 00:25:18.959132 extend-filesystems[1555]: Found vda6 Nov 1 00:25:18.959132 extend-filesystems[1555]: Found vda7 Nov 1 00:25:18.959132 extend-filesystems[1555]: Found vda9 Nov 1 00:25:18.959132 extend-filesystems[1555]: Checking size of /dev/vda9 Nov 1 00:25:18.953576 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:25:18.966064 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:25:18.973536 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:25:18.983543 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:25:18.997404 extend-filesystems[1555]: Resized partition /dev/vda9 Nov 1 00:25:19.020465 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1268) Nov 1 00:25:19.020512 extend-filesystems[1588]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:25:19.028540 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:25:19.032255 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:25:19.049369 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:25:19.052029 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:25:19.053459 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:25:19.057475 update_engine[1581]: I20251101 00:25:19.055373 1581 main.cc:92] Flatcar Update Engine starting Nov 1 00:25:19.057475 update_engine[1581]: I20251101 00:25:19.057008 1581 update_check_scheduler.cc:74] Next update check in 4m10s Nov 1 00:25:19.056741 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:25:19.057067 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:25:19.059490 jq[1590]: true Nov 1 00:25:19.063231 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:25:19.077861 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:25:19.078287 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:25:19.094649 jq[1599]: true Nov 1 00:25:19.092408 (ntainerd)[1600]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:25:19.093588 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 00:25:19.093964 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 1 00:25:19.115472 systemd[1]: Started update-engine.service - Update Engine. 
Nov 1 00:25:19.161950 systemd-logind[1574]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:25:19.161979 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:25:19.163112 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:25:19.163248 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:25:19.163278 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:25:19.165215 systemd-logind[1574]: New seat seat0. Nov 1 00:25:19.167743 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:25:19.167775 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:25:19.183073 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:25:19.184210 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:25:19.186914 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:25:19.192275 tar[1597]: linux-amd64/LICENSE Nov 1 00:25:19.194573 tar[1597]: linux-amd64/helm Nov 1 00:25:19.259015 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:25:19.393467 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:25:19.403926 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:25:19.409538 extend-filesystems[1588]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:25:19.409538 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:25:19.409538 extend-filesystems[1588]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 1 00:25:19.425752 extend-filesystems[1555]: Resized filesystem in /dev/vda9 Nov 1 00:25:19.416037 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:25:19.416639 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:25:19.429241 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:25:19.439381 bash[1633]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:25:19.442502 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:25:19.446233 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 1 00:25:19.485351 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:25:19.496742 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:25:19.500779 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:32914.service - OpenSSH per-connection server daemon (10.0.0.1:32914). Nov 1 00:25:19.509468 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:25:19.509910 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:25:19.523721 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:25:19.596836 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
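The extend-filesystems/resize2fs messages above report the root filesystem growing on-line from 553472 to 1864699 blocks of 4k. A short Python check of what those logged figures mean in bytes and GiB (pure arithmetic on the numbers from the log, nothing system-specific):

    BLOCK = 4096                                  # 4k blocks, as reported by resize2fs above

    old_blocks, new_blocks = 553_472, 1_864_699
    old_bytes, new_bytes = old_blocks * BLOCK, new_blocks * BLOCK

    print(f"before resize: {old_bytes / 2**30:.2f} GiB")    # ~2.11 GiB
    print(f"after resize:  {new_bytes / 2**30:.2f} GiB")    # ~7.11 GiB
    print(f"grown by:      {(new_bytes - old_bytes) / 2**30:.2f} GiB")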
Nov 1 00:25:19.649258 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:25:19.653577 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:25:19.656880 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:25:19.780614 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 32914 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:25:19.826445 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:19.839727 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:25:19.860523 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:25:19.870578 systemd-logind[1574]: New session 1 of user core. Nov 1 00:25:19.918258 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:25:19.961238 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:25:19.967966 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:25:20.105411 containerd[1600]: time="2025-11-01T00:25:20.105248618Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.149119521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.152257515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.152302491Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.152327013Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.152656032Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.152678569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.153031874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.153056474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.153881516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.153900183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214037 containerd[1600]: time="2025-11-01T00:25:20.153932694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.153945285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.154083234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.154450681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.154646769Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.154661543Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.154797911Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.154899881Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.163798047Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.163868601Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.163885953Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.163907029Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.163924648Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:25:20.214264 containerd[1600]: time="2025-11-01T00:25:20.164220158Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165120870Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165535198Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165583254Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165598977Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165625692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165667072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165687239Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165706171Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165728234Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165741468Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165940065Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165959737Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.165996940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214511 containerd[1600]: time="2025-11-01T00:25:20.166020446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166033136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166045244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166060770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166074072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166088640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166103157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166116381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166136015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166149022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166168072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166182650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166256917Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166426833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166456649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.214757 containerd[1600]: time="2025-11-01T00:25:20.166477280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:25:20.215206 containerd[1600]: time="2025-11-01T00:25:20.166588375Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:25:20.215206 containerd[1600]: time="2025-11-01T00:25:20.166611406Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:25:20.215206 containerd[1600]: time="2025-11-01T00:25:20.166622675Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:25:20.215206 containerd[1600]: time="2025-11-01T00:25:20.166866109Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:25:20.215206 containerd[1600]: time="2025-11-01T00:25:20.166880745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:25:20.215206 containerd[1600]: time="2025-11-01T00:25:20.166902195Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:25:20.215206 containerd[1600]: time="2025-11-01T00:25:20.166941630Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:25:20.215206 containerd[1600]: time="2025-11-01T00:25:20.170611338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.171130550Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.171234100Z" level=info msg="Connect containerd service" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.171290314Z" level=info msg="using legacy CRI server" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.171301277Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.171464408Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.172569948Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:25:20.215379 
containerd[1600]: time="2025-11-01T00:25:20.173108753Z" level=info msg="Start subscribing containerd event" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.173367969Z" level=info msg="Start recovering state" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.174010384Z" level=info msg="Start event monitor" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.174043261Z" level=info msg="Start snapshots syncer" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.174068030Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.174087545Z" level=info msg="Start streaming server" Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.175002636Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.175075057Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:25:20.215379 containerd[1600]: time="2025-11-01T00:25:20.175146974Z" level=info msg="containerd successfully booted in 0.071733s" Nov 1 00:25:20.216198 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:25:20.353795 systemd[1674]: Queued start job for default target default.target. Nov 1 00:25:20.354287 systemd[1674]: Created slice app.slice - User Application Slice. Nov 1 00:25:20.354350 systemd[1674]: Reached target paths.target - Paths. Nov 1 00:25:20.354367 systemd[1674]: Reached target timers.target - Timers. Nov 1 00:25:20.460744 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:25:20.470205 tar[1597]: linux-amd64/README.md Nov 1 00:25:20.471527 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:25:20.471619 systemd[1674]: Reached target sockets.target - Sockets. Nov 1 00:25:20.471635 systemd[1674]: Reached target basic.target - Basic System. Nov 1 00:25:20.471678 systemd[1674]: Reached target default.target - Main User Target. Nov 1 00:25:20.471731 systemd[1674]: Startup finished in 490ms. Nov 1 00:25:20.472043 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:25:20.489239 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:25:20.548648 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:32918.service - OpenSSH per-connection server daemon (10.0.0.1:32918). Nov 1 00:25:20.605721 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:25:20.632735 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 32918 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:25:20.635577 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:20.640930 systemd-logind[1574]: New session 2 of user core. Nov 1 00:25:20.650986 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:25:20.714848 sshd[1694]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:20.725715 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:32932.service - OpenSSH per-connection server daemon (10.0.0.1:32932). Nov 1 00:25:20.728792 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:32918.service: Deactivated successfully. Nov 1 00:25:20.732808 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:25:20.733738 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:25:20.736245 systemd-logind[1574]: Removed session 2. 
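The cni error logged by containerd above ("no network config found in /etc/cni/net.d") is expected before a pod-network add-on is installed; the conf syncer started afterwards will pick up whatever lands in that directory. As an illustrative sketch only (the file name and the 10.88.0.0/16 subnet are arbitrary here; a real cluster normally gets its config from the chosen CNI add-on), a minimal bridge conflist would look like:

    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/10-containerd-net.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local",
                    "ranges": [ [ { "subnet": "10.88.0.0/16" } ] ],
                    "routes": [ { "dst": "0.0.0.0/0" } ] } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF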
Nov 1 00:25:20.763539 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 32932 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:25:20.766184 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:20.772424 systemd-logind[1574]: New session 3 of user core. Nov 1 00:25:20.792296 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:25:20.855000 sshd[1701]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:20.860641 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:32932.service: Deactivated successfully. Nov 1 00:25:20.864211 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:25:20.864564 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:25:20.866020 systemd-logind[1574]: Removed session 3. Nov 1 00:25:21.616717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:25:21.619205 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:25:21.621608 systemd[1]: Startup finished in 10.058s (kernel) + 7.093s (userspace) = 17.152s. Nov 1 00:25:21.653139 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:25:22.812498 kubelet[1720]: E1101 00:25:22.812317 1720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:25:22.816966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:25:22.817301 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:25:30.802062 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:45556.service - OpenSSH per-connection server daemon (10.0.0.1:45556). Nov 1 00:25:30.831268 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 45556 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:25:30.832891 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:30.837085 systemd-logind[1574]: New session 4 of user core. Nov 1 00:25:30.850605 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:25:30.906466 sshd[1734]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:30.919592 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:45572.service - OpenSSH per-connection server daemon (10.0.0.1:45572). Nov 1 00:25:30.920420 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:45556.service: Deactivated successfully. Nov 1 00:25:30.922394 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:25:30.923111 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:25:30.924455 systemd-logind[1574]: Removed session 4. Nov 1 00:25:30.954196 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 45572 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:25:30.956461 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:30.961383 systemd-logind[1574]: New session 5 of user core. Nov 1 00:25:30.967661 systemd[1]: Started session-5.scope - Session 5 of User core. 
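The kubelet exit above (and the identical restarts later in this log) is the normal state of a node that has not yet been initialised or joined: /var/lib/kubelet/config.yaml is written by kubeadm, not shipped with the OS, so the unit keeps failing and being rescheduled until that happens. A sketch, with the caveat that the actual kubeadm invocation used for this node is not shown in the log:

    ls -l /var/lib/kubelet/config.yaml              # absent until kubeadm has run
    sudo kubeadm init --kubernetes-version v1.32.9  # e.g. on a control-plane node; "kubeadm join" on workers
    systemctl status kubelet                        # the scheduled restarts succeed once the file exists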
Nov 1 00:25:31.021458 sshd[1740]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:31.038765 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:45574.service - OpenSSH per-connection server daemon (10.0.0.1:45574). Nov 1 00:25:31.039809 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:45572.service: Deactivated successfully. Nov 1 00:25:31.042855 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:25:31.043771 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:25:31.045733 systemd-logind[1574]: Removed session 5. Nov 1 00:25:31.067451 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 45574 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:25:31.069574 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:31.074519 systemd-logind[1574]: New session 6 of user core. Nov 1 00:25:31.087608 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 00:25:31.147454 sshd[1748]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:31.150768 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:45574.service: Deactivated successfully. Nov 1 00:25:31.155644 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:25:31.157104 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:25:31.166783 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:45588.service - OpenSSH per-connection server daemon (10.0.0.1:45588). Nov 1 00:25:31.167615 systemd-logind[1574]: Removed session 6. Nov 1 00:25:31.208094 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 45588 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:25:31.210959 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:31.215583 systemd-logind[1574]: New session 7 of user core. Nov 1 00:25:31.225636 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:25:31.291298 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:25:31.291997 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:25:31.314882 sudo[1763]: pam_unix(sudo:session): session closed for user root Nov 1 00:25:31.318297 sshd[1758]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:31.332023 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:45598.service - OpenSSH per-connection server daemon (10.0.0.1:45598). Nov 1 00:25:31.333322 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:45588.service: Deactivated successfully. Nov 1 00:25:31.337456 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:25:31.338645 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:25:31.341552 systemd-logind[1574]: Removed session 7. Nov 1 00:25:31.368798 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 45598 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:25:31.370859 sshd[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:31.375653 systemd-logind[1574]: New session 8 of user core. Nov 1 00:25:31.389973 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 1 00:25:31.449484 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:25:31.449963 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:25:31.454211 sudo[1773]: pam_unix(sudo:session): session closed for user root Nov 1 00:25:31.462896 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:25:31.463404 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:25:31.489724 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:25:31.492168 auditctl[1776]: No rules Nov 1 00:25:31.493711 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:25:31.494165 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:25:31.496576 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:25:31.537195 augenrules[1795]: No rules Nov 1 00:25:31.540263 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:25:31.542322 sudo[1772]: pam_unix(sudo:session): session closed for user root Nov 1 00:25:31.545200 sshd[1765]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:31.554906 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:45614.service - OpenSSH per-connection server daemon (10.0.0.1:45614). Nov 1 00:25:31.555963 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:45598.service: Deactivated successfully. Nov 1 00:25:31.560161 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:25:31.561497 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:25:31.564015 systemd-logind[1574]: Removed session 8. Nov 1 00:25:31.585309 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 45614 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:25:31.587110 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:31.591841 systemd-logind[1574]: New session 9 of user core. Nov 1 00:25:31.602623 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:25:31.659548 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:25:31.660052 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:25:32.400594 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:25:32.400846 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:25:32.841087 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:25:32.873655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:25:33.156905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:25:33.240880 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:25:33.344960 kubelet[1844]: E1101 00:25:33.344888 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:25:33.352628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:25:33.354680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:25:33.406790 dockerd[1826]: time="2025-11-01T00:25:33.406707830Z" level=info msg="Starting up" Nov 1 00:25:34.099511 dockerd[1826]: time="2025-11-01T00:25:34.099450475Z" level=info msg="Loading containers: start." Nov 1 00:25:34.241367 kernel: Initializing XFRM netlink socket Nov 1 00:25:34.354653 systemd-networkd[1274]: docker0: Link UP Nov 1 00:25:34.381045 dockerd[1826]: time="2025-11-01T00:25:34.380991074Z" level=info msg="Loading containers: done." Nov 1 00:25:34.403418 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2539686205-merged.mount: Deactivated successfully. Nov 1 00:25:34.405373 dockerd[1826]: time="2025-11-01T00:25:34.405310819Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:25:34.405499 dockerd[1826]: time="2025-11-01T00:25:34.405473496Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:25:34.405639 dockerd[1826]: time="2025-11-01T00:25:34.405616630Z" level=info msg="Daemon has completed initialization" Nov 1 00:25:34.454323 dockerd[1826]: time="2025-11-01T00:25:34.454202585Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:25:34.454461 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:25:35.628138 containerd[1600]: time="2025-11-01T00:25:35.628073227Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:25:36.844111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433362341.mount: Deactivated successfully. 
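The overlay2 warning above is informational (the kernel's redirect_dir feature disables the native diff driver, but overlayfs itself keeps working). Assuming the docker CLI is available on the host, the daemon state reported in the log can be confirmed with:

    docker info --format '{{.Driver}} {{.ServerVersion}}'   # expect "overlay2 26.1.0" per the log above
    ip link show docker0                                    # bridge brought up during "Loading containers"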
Nov 1 00:25:38.190057 containerd[1600]: time="2025-11-01T00:25:38.189986474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:38.190856 containerd[1600]: time="2025-11-01T00:25:38.190808097Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 1 00:25:38.192126 containerd[1600]: time="2025-11-01T00:25:38.192091602Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:38.195485 containerd[1600]: time="2025-11-01T00:25:38.195425001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:38.196929 containerd[1600]: time="2025-11-01T00:25:38.196863277Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.568724035s" Nov 1 00:25:38.196929 containerd[1600]: time="2025-11-01T00:25:38.196912405Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:25:38.197704 containerd[1600]: time="2025-11-01T00:25:38.197677155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:25:39.533488 containerd[1600]: time="2025-11-01T00:25:39.533397604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:39.534929 containerd[1600]: time="2025-11-01T00:25:39.534876152Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 1 00:25:39.536437 containerd[1600]: time="2025-11-01T00:25:39.536389395Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:39.539614 containerd[1600]: time="2025-11-01T00:25:39.539575562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:39.540710 containerd[1600]: time="2025-11-01T00:25:39.540672063Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.342962687s" Nov 1 00:25:39.540776 containerd[1600]: time="2025-11-01T00:25:39.540708020Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:25:39.541356 containerd[1600]: 
time="2025-11-01T00:25:39.541235225Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:25:41.043844 containerd[1600]: time="2025-11-01T00:25:41.043745253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:41.047890 containerd[1600]: time="2025-11-01T00:25:41.047826629Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 1 00:25:41.053866 containerd[1600]: time="2025-11-01T00:25:41.053814726Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:41.059158 containerd[1600]: time="2025-11-01T00:25:41.059123866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:41.060416 containerd[1600]: time="2025-11-01T00:25:41.060372701Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.519091205s" Nov 1 00:25:41.060416 containerd[1600]: time="2025-11-01T00:25:41.060412941Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:25:41.061028 containerd[1600]: time="2025-11-01T00:25:41.060972709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:25:43.238693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242665532.mount: Deactivated successfully. Nov 1 00:25:43.591176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:25:43.613602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:25:43.788127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:25:43.806447 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:25:44.135270 kubelet[2080]: E1101 00:25:44.135199 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:25:44.140498 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:25:44.140964 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 00:25:45.181822 containerd[1600]: time="2025-11-01T00:25:45.181736926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:45.183223 containerd[1600]: time="2025-11-01T00:25:45.183170911Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 1 00:25:45.185027 containerd[1600]: time="2025-11-01T00:25:45.184987300Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:45.187818 containerd[1600]: time="2025-11-01T00:25:45.187764985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:45.188638 containerd[1600]: time="2025-11-01T00:25:45.188551764Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 4.127547109s" Nov 1 00:25:45.188638 containerd[1600]: time="2025-11-01T00:25:45.188599488Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:25:45.189320 containerd[1600]: time="2025-11-01T00:25:45.189290825Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:25:45.780154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1483677120.mount: Deactivated successfully. 
Nov 1 00:25:47.148855 containerd[1600]: time="2025-11-01T00:25:47.148623158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:47.150700 containerd[1600]: time="2025-11-01T00:25:47.150613792Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 1 00:25:47.153909 containerd[1600]: time="2025-11-01T00:25:47.153841907Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:47.158564 containerd[1600]: time="2025-11-01T00:25:47.158512910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:47.160362 containerd[1600]: time="2025-11-01T00:25:47.160289245Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.970958854s" Nov 1 00:25:47.160362 containerd[1600]: time="2025-11-01T00:25:47.160357476Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:25:47.161248 containerd[1600]: time="2025-11-01T00:25:47.161015655Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:25:48.126683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043501476.mount: Deactivated successfully. 
Nov 1 00:25:48.136103 containerd[1600]: time="2025-11-01T00:25:48.136009573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:48.136921 containerd[1600]: time="2025-11-01T00:25:48.136812377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 1 00:25:48.138357 containerd[1600]: time="2025-11-01T00:25:48.138289247Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:48.143098 containerd[1600]: time="2025-11-01T00:25:48.142977753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:48.144881 containerd[1600]: time="2025-11-01T00:25:48.144789377Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 983.718488ms" Nov 1 00:25:48.145008 containerd[1600]: time="2025-11-01T00:25:48.144882300Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:25:48.145864 containerd[1600]: time="2025-11-01T00:25:48.145806872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:25:48.666904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630157277.mount: Deactivated successfully. Nov 1 00:25:54.341086 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 00:25:54.354597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:25:54.575822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:25:54.580844 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:25:54.640625 kubelet[2213]: E1101 00:25:54.640373 2213 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:25:54.645964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:25:54.646232 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
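The PullImage calls above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and, just below, etcd) are the standard control-plane image set for this release. A hedged sketch of fetching the same set explicitly ahead of time, assuming kubeadm and crictl are pointed at this host's containerd socket:

    kubeadm config images list --kubernetes-version v1.32.9   # print the expected image set
    kubeadm config images pull --kubernetes-version v1.32.9   # pre-pull them through the CRI
    crictl pull registry.k8s.io/etcd:3.5.16-0                  # or pull a single image directly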
Nov 1 00:25:56.483602 containerd[1600]: time="2025-11-01T00:25:56.483528475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:56.533431 containerd[1600]: time="2025-11-01T00:25:56.533292732Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 1 00:25:56.575291 containerd[1600]: time="2025-11-01T00:25:56.575213635Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:56.659262 containerd[1600]: time="2025-11-01T00:25:56.659179685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:25:56.661212 containerd[1600]: time="2025-11-01T00:25:56.661141412Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 8.5152783s" Nov 1 00:25:56.661212 containerd[1600]: time="2025-11-01T00:25:56.661206546Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:25:58.824137 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:25:58.837644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:25:58.868662 systemd[1]: Reloading requested from client PID 2255 ('systemctl') (unit session-9.scope)... Nov 1 00:25:58.868683 systemd[1]: Reloading... Nov 1 00:25:58.954903 zram_generator::config[2294]: No configuration found. Nov 1 00:25:59.233781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:25:59.334669 systemd[1]: Reloading finished in 465 ms. Nov 1 00:25:59.401398 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:25:59.401616 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:25:59.402210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:25:59.405313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:25:59.596485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:25:59.603791 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:25:59.652679 kubelet[2355]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:25:59.652679 kubelet[2355]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 1 00:25:59.652679 kubelet[2355]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:25:59.653215 kubelet[2355]: I1101 00:25:59.652759 2355 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:25:59.845692 kubelet[2355]: I1101 00:25:59.845635 2355 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:25:59.845692 kubelet[2355]: I1101 00:25:59.845674 2355 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:25:59.846008 kubelet[2355]: I1101 00:25:59.845981 2355 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:25:59.869502 kubelet[2355]: E1101 00:25:59.869326 2355 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:25:59.872216 kubelet[2355]: I1101 00:25:59.872172 2355 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:25:59.879426 kubelet[2355]: E1101 00:25:59.879367 2355 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:25:59.879426 kubelet[2355]: I1101 00:25:59.879408 2355 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:25:59.886710 kubelet[2355]: I1101 00:25:59.886655 2355 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:25:59.887403 kubelet[2355]: I1101 00:25:59.887308 2355 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:25:59.887618 kubelet[2355]: I1101 00:25:59.887360 2355 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:25:59.887618 kubelet[2355]: I1101 00:25:59.887615 2355 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:25:59.887816 kubelet[2355]: I1101 00:25:59.887628 2355 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:25:59.887853 kubelet[2355]: I1101 00:25:59.887815 2355 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:25:59.891168 kubelet[2355]: I1101 00:25:59.891134 2355 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:25:59.891228 kubelet[2355]: I1101 00:25:59.891193 2355 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:25:59.891228 kubelet[2355]: I1101 00:25:59.891218 2355 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:25:59.891305 kubelet[2355]: I1101 00:25:59.891234 2355 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:25:59.896219 kubelet[2355]: I1101 00:25:59.895553 2355 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:25:59.896219 kubelet[2355]: I1101 00:25:59.895975 2355 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:25:59.896219 kubelet[2355]: W1101 00:25:59.896044 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Nov 1 00:25:59.896219 kubelet[2355]: E1101 00:25:59.896126 2355 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:25:59.896635 kubelet[2355]: W1101 00:25:59.896600 2355 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:25:59.896875 kubelet[2355]: W1101 00:25:59.896826 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Nov 1 00:25:59.896941 kubelet[2355]: E1101 00:25:59.896889 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:25:59.898681 kubelet[2355]: I1101 00:25:59.898651 2355 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:25:59.898735 kubelet[2355]: I1101 00:25:59.898703 2355 server.go:1287] "Started kubelet" Nov 1 00:25:59.902068 kubelet[2355]: I1101 00:25:59.900069 2355 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:25:59.902068 kubelet[2355]: I1101 00:25:59.900643 2355 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:25:59.902068 kubelet[2355]: I1101 00:25:59.900822 2355 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:25:59.902068 kubelet[2355]: I1101 00:25:59.900838 2355 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:25:59.902068 kubelet[2355]: I1101 00:25:59.901112 2355 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:25:59.902068 kubelet[2355]: I1101 00:25:59.901933 2355 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:25:59.903433 kubelet[2355]: E1101 00:25:59.903393 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:25:59.903484 kubelet[2355]: I1101 00:25:59.903455 2355 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:25:59.904128 kubelet[2355]: I1101 00:25:59.903645 2355 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:25:59.904128 kubelet[2355]: I1101 00:25:59.903740 2355 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:25:59.904270 kubelet[2355]: W1101 00:25:59.904203 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Nov 1 00:25:59.904270 kubelet[2355]: E1101 00:25:59.904261 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: 
connection refused" logger="UnhandledError" Nov 1 00:25:59.904830 kubelet[2355]: I1101 00:25:59.904793 2355 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:25:59.904918 kubelet[2355]: I1101 00:25:59.904898 2355 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:25:59.905303 kubelet[2355]: E1101 00:25:59.903060 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873ba567f5bc8e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:25:59.898671333 +0000 UTC m=+0.289902967,LastTimestamp:2025-11-01 00:25:59.898671333 +0000 UTC m=+0.289902967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:25:59.905595 kubelet[2355]: E1101 00:25:59.905551 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms" Nov 1 00:25:59.906048 kubelet[2355]: E1101 00:25:59.906016 2355 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:25:59.906899 kubelet[2355]: I1101 00:25:59.906862 2355 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:25:59.925958 kubelet[2355]: I1101 00:25:59.925766 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:25:59.928489 kubelet[2355]: I1101 00:25:59.928244 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:25:59.928489 kubelet[2355]: I1101 00:25:59.928311 2355 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:25:59.928602 kubelet[2355]: I1101 00:25:59.928573 2355 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:25:59.928602 kubelet[2355]: I1101 00:25:59.928591 2355 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:25:59.928723 kubelet[2355]: E1101 00:25:59.928661 2355 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:25:59.929378 kubelet[2355]: W1101 00:25:59.929302 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Nov 1 00:25:59.929429 kubelet[2355]: E1101 00:25:59.929375 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:25:59.933258 kubelet[2355]: I1101 00:25:59.933212 2355 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:25:59.933258 kubelet[2355]: I1101 00:25:59.933240 2355 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:25:59.933258 kubelet[2355]: I1101 00:25:59.933258 2355 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:26:00.004024 kubelet[2355]: E1101 00:26:00.003953 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:26:00.029581 kubelet[2355]: E1101 00:26:00.029518 2355 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:26:00.104836 kubelet[2355]: E1101 00:26:00.104776 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:26:00.106486 kubelet[2355]: E1101 00:26:00.106445 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Nov 1 00:26:00.205772 kubelet[2355]: E1101 00:26:00.205588 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:26:00.229937 kubelet[2355]: E1101 00:26:00.229858 2355 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:26:00.306425 kubelet[2355]: E1101 00:26:00.306360 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:26:00.344825 kubelet[2355]: I1101 00:26:00.344759 2355 policy_none.go:49] "None policy: Start" Nov 1 00:26:00.344825 kubelet[2355]: I1101 00:26:00.344817 2355 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:26:00.345028 kubelet[2355]: I1101 00:26:00.344848 2355 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:26:00.353990 kubelet[2355]: I1101 00:26:00.353955 2355 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:26:00.354282 kubelet[2355]: I1101 00:26:00.354236 2355 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:26:00.354351 kubelet[2355]: I1101 00:26:00.354255 2355 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:26:00.355572 kubelet[2355]: I1101 00:26:00.355458 2355 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:26:00.356301 kubelet[2355]: E1101 00:26:00.356220 2355 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:26:00.356301 kubelet[2355]: E1101 00:26:00.356282 2355 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:26:00.456208 kubelet[2355]: I1101 00:26:00.456049 2355 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:00.456507 kubelet[2355]: E1101 00:26:00.456474 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 1 00:26:00.507548 kubelet[2355]: E1101 00:26:00.507473 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms" Nov 1 00:26:00.636546 kubelet[2355]: E1101 00:26:00.636495 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:00.637646 kubelet[2355]: E1101 00:26:00.637614 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:00.639835 kubelet[2355]: E1101 00:26:00.639804 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:00.658559 kubelet[2355]: I1101 00:26:00.658511 2355 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:00.659172 kubelet[2355]: E1101 00:26:00.658958 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 1 00:26:00.709140 kubelet[2355]: I1101 00:26:00.708959 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:00.709140 kubelet[2355]: I1101 00:26:00.709047 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:00.709140 kubelet[2355]: I1101 00:26:00.709087 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" 
Nov 1 00:26:00.709140 kubelet[2355]: I1101 00:26:00.709112 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f527c6a4d4ee203eb24b825ac9683032-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f527c6a4d4ee203eb24b825ac9683032\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:00.709140 kubelet[2355]: I1101 00:26:00.709133 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f527c6a4d4ee203eb24b825ac9683032-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f527c6a4d4ee203eb24b825ac9683032\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:00.709444 kubelet[2355]: I1101 00:26:00.709153 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f527c6a4d4ee203eb24b825ac9683032-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f527c6a4d4ee203eb24b825ac9683032\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:00.709444 kubelet[2355]: I1101 00:26:00.709256 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:00.709444 kubelet[2355]: I1101 00:26:00.709312 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:00.709444 kubelet[2355]: I1101 00:26:00.709357 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:00.937863 kubelet[2355]: E1101 00:26:00.937801 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:00.938047 kubelet[2355]: E1101 00:26:00.937974 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:00.938928 containerd[1600]: time="2025-11-01T00:26:00.938852846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:00.939484 containerd[1600]: time="2025-11-01T00:26:00.938893623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f527c6a4d4ee203eb24b825ac9683032,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:00.941383 kubelet[2355]: E1101 00:26:00.941279 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 1 00:26:00.942059 containerd[1600]: time="2025-11-01T00:26:00.942020795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:00.986760 kubelet[2355]: W1101 00:26:00.986585 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Nov 1 00:26:00.986760 kubelet[2355]: E1101 00:26:00.986647 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:01.060860 kubelet[2355]: I1101 00:26:01.060811 2355 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:01.061426 kubelet[2355]: E1101 00:26:01.061378 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 1 00:26:01.124758 kubelet[2355]: W1101 00:26:01.124649 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Nov 1 00:26:01.124758 kubelet[2355]: E1101 00:26:01.124744 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:01.308751 kubelet[2355]: E1101 00:26:01.308586 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="1.6s" Nov 1 00:26:01.345591 kubelet[2355]: W1101 00:26:01.345537 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Nov 1 00:26:01.345733 kubelet[2355]: E1101 00:26:01.345587 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:01.371792 kubelet[2355]: W1101 00:26:01.371711 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Nov 1 00:26:01.371792 kubelet[2355]: E1101 00:26:01.371789 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:01.478048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount428238091.mount: Deactivated successfully. Nov 1 00:26:01.484497 containerd[1600]: time="2025-11-01T00:26:01.484429660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:26:01.485434 containerd[1600]: time="2025-11-01T00:26:01.485351418Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:26:01.694945 containerd[1600]: time="2025-11-01T00:26:01.694873335Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:26:01.745261 containerd[1600]: time="2025-11-01T00:26:01.745181616Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:26:01.748992 containerd[1600]: time="2025-11-01T00:26:01.748904800Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:26:01.750693 containerd[1600]: time="2025-11-01T00:26:01.750602393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:26:01.751529 containerd[1600]: time="2025-11-01T00:26:01.751474108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 809.368854ms" Nov 1 00:26:01.752003 containerd[1600]: time="2025-11-01T00:26:01.751750529Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:26:01.753009 containerd[1600]: time="2025-11-01T00:26:01.752939472Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:26:01.757609 containerd[1600]: time="2025-11-01T00:26:01.757553236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 818.438005ms" Nov 1 00:26:01.760888 containerd[1600]: time="2025-11-01T00:26:01.760837262Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 821.861824ms" Nov 1 
00:26:01.863577 kubelet[2355]: I1101 00:26:01.863153 2355 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:01.863577 kubelet[2355]: E1101 00:26:01.863542 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Nov 1 00:26:01.880113 containerd[1600]: time="2025-11-01T00:26:01.879596627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:01.880113 containerd[1600]: time="2025-11-01T00:26:01.879693320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:01.880113 containerd[1600]: time="2025-11-01T00:26:01.879724288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:01.880113 containerd[1600]: time="2025-11-01T00:26:01.879907984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:01.880854 containerd[1600]: time="2025-11-01T00:26:01.880774499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:01.881037 containerd[1600]: time="2025-11-01T00:26:01.881007068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:01.881135 containerd[1600]: time="2025-11-01T00:26:01.881110704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:01.881427 containerd[1600]: time="2025-11-01T00:26:01.881392355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:01.888121 containerd[1600]: time="2025-11-01T00:26:01.886016389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:01.888121 containerd[1600]: time="2025-11-01T00:26:01.886090158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:01.888121 containerd[1600]: time="2025-11-01T00:26:01.886114734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:01.888121 containerd[1600]: time="2025-11-01T00:26:01.886269015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:01.955677 containerd[1600]: time="2025-11-01T00:26:01.955488542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"487d1749a9d5b526868c632cf38c8200f65c90122b356cc273cded0ee3f56de9\"" Nov 1 00:26:01.957891 kubelet[2355]: E1101 00:26:01.957669 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:01.960123 containerd[1600]: time="2025-11-01T00:26:01.960077720Z" level=info msg="CreateContainer within sandbox \"487d1749a9d5b526868c632cf38c8200f65c90122b356cc273cded0ee3f56de9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:26:01.961308 containerd[1600]: time="2025-11-01T00:26:01.961068780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f527c6a4d4ee203eb24b825ac9683032,Namespace:kube-system,Attempt:0,} returns sandbox id \"0def4f3db9bff6bb0a4eba54a2431204b8474d27f8d256983073dc1ed364ccff\"" Nov 1 00:26:01.963566 kubelet[2355]: E1101 00:26:01.963545 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:01.965147 containerd[1600]: time="2025-11-01T00:26:01.964943529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"be6eda328735c69b531c66b93e7abae48cc3e1447247cfd5599a9e71be441951\"" Nov 1 00:26:01.965905 containerd[1600]: time="2025-11-01T00:26:01.965879886Z" level=info msg="CreateContainer within sandbox \"0def4f3db9bff6bb0a4eba54a2431204b8474d27f8d256983073dc1ed364ccff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:26:01.966644 kubelet[2355]: E1101 00:26:01.966152 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:01.967955 containerd[1600]: time="2025-11-01T00:26:01.967933070Z" level=info msg="CreateContainer within sandbox \"be6eda328735c69b531c66b93e7abae48cc3e1447247cfd5599a9e71be441951\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:26:01.977695 kubelet[2355]: E1101 00:26:01.977651 2355 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:26:01.985156 containerd[1600]: time="2025-11-01T00:26:01.985109454Z" level=info msg="CreateContainer within sandbox \"487d1749a9d5b526868c632cf38c8200f65c90122b356cc273cded0ee3f56de9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1166c37b599e0c46cd8eb14f3208e5220d340c56429225147f6bcd67f6c69afc\"" Nov 1 00:26:01.986390 containerd[1600]: time="2025-11-01T00:26:01.986305260Z" level=info msg="StartContainer for \"1166c37b599e0c46cd8eb14f3208e5220d340c56429225147f6bcd67f6c69afc\"" Nov 1 00:26:01.995599 containerd[1600]: 
time="2025-11-01T00:26:01.995519053Z" level=info msg="CreateContainer within sandbox \"0def4f3db9bff6bb0a4eba54a2431204b8474d27f8d256983073dc1ed364ccff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b15536f07e3bc14b6a3109198307bfa255aa8b620f11828487de66e30ec1bef3\"" Nov 1 00:26:01.996465 containerd[1600]: time="2025-11-01T00:26:01.996292932Z" level=info msg="StartContainer for \"b15536f07e3bc14b6a3109198307bfa255aa8b620f11828487de66e30ec1bef3\"" Nov 1 00:26:02.002199 containerd[1600]: time="2025-11-01T00:26:02.002142918Z" level=info msg="CreateContainer within sandbox \"be6eda328735c69b531c66b93e7abae48cc3e1447247cfd5599a9e71be441951\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"19a84435883be31cf109dff946813f935a50e6f8c5b668073e8cda8e6c74f557\"" Nov 1 00:26:02.004411 containerd[1600]: time="2025-11-01T00:26:02.002976720Z" level=info msg="StartContainer for \"19a84435883be31cf109dff946813f935a50e6f8c5b668073e8cda8e6c74f557\"" Nov 1 00:26:02.090705 containerd[1600]: time="2025-11-01T00:26:02.090634753Z" level=info msg="StartContainer for \"1166c37b599e0c46cd8eb14f3208e5220d340c56429225147f6bcd67f6c69afc\" returns successfully" Nov 1 00:26:02.090865 containerd[1600]: time="2025-11-01T00:26:02.090777662Z" level=info msg="StartContainer for \"b15536f07e3bc14b6a3109198307bfa255aa8b620f11828487de66e30ec1bef3\" returns successfully" Nov 1 00:26:02.104433 containerd[1600]: time="2025-11-01T00:26:02.104373023Z" level=info msg="StartContainer for \"19a84435883be31cf109dff946813f935a50e6f8c5b668073e8cda8e6c74f557\" returns successfully" Nov 1 00:26:02.983346 kubelet[2355]: E1101 00:26:02.981991 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:02.983346 kubelet[2355]: E1101 00:26:02.982133 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:02.992702 kubelet[2355]: E1101 00:26:02.989009 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:02.992702 kubelet[2355]: E1101 00:26:02.989122 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:02.993723 kubelet[2355]: E1101 00:26:02.993696 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:26:02.993832 kubelet[2355]: E1101 00:26:02.993810 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:03.084655 kubelet[2355]: E1101 00:26:03.084572 2355 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:26:03.465449 kubelet[2355]: I1101 00:26:03.465404 2355 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:03.497524 kubelet[2355]: I1101 00:26:03.497469 2355 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:26:03.497524 kubelet[2355]: E1101 00:26:03.497524 2355 kubelet_node_status.go:548] "Error updating 
node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 1 00:26:03.505767 kubelet[2355]: I1101 00:26:03.505697 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:04.037785 kubelet[2355]: E1101 00:26:03.537750 2355 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:04.037785 kubelet[2355]: I1101 00:26:03.537793 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:04.037785 kubelet[2355]: E1101 00:26:03.541077 2355 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:04.037785 kubelet[2355]: I1101 00:26:03.541131 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:04.037785 kubelet[2355]: I1101 00:26:04.037359 2355 apiserver.go:52] "Watching apiserver" Nov 1 00:26:04.040991 kubelet[2355]: I1101 00:26:04.040969 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:04.041662 kubelet[2355]: I1101 00:26:04.041572 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:04.051056 kubelet[2355]: E1101 00:26:04.050988 2355 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:04.051377 kubelet[2355]: E1101 00:26:04.051314 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:04.051435 kubelet[2355]: E1101 00:26:04.051028 2355 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:04.053147 kubelet[2355]: E1101 00:26:04.052885 2355 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:04.053147 kubelet[2355]: E1101 00:26:04.053114 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:04.104926 kubelet[2355]: I1101 00:26:04.103869 2355 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:26:04.480241 update_engine[1581]: I20251101 00:26:04.480165 1581 update_attempter.cc:509] Updating boot flags... 
Nov 1 00:26:04.697384 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2633) Nov 1 00:26:04.713992 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2637) Nov 1 00:26:04.754436 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2637) Nov 1 00:26:05.479082 systemd[1]: Reloading requested from client PID 2642 ('systemctl') (unit session-9.scope)... Nov 1 00:26:05.479102 systemd[1]: Reloading... Nov 1 00:26:05.563389 zram_generator::config[2682]: No configuration found. Nov 1 00:26:05.698129 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:26:05.787352 systemd[1]: Reloading finished in 307 ms. Nov 1 00:26:05.826600 kubelet[2355]: I1101 00:26:05.825990 2355 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:26:05.826089 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:26:05.845983 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:26:05.846508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:26:05.858630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:26:06.031425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:26:06.037475 (kubelet)[2736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:26:06.079678 kubelet[2736]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:26:06.079678 kubelet[2736]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:26:06.079678 kubelet[2736]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:26:06.080102 kubelet[2736]: I1101 00:26:06.079687 2736 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:26:06.088288 kubelet[2736]: I1101 00:26:06.088232 2736 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:26:06.088288 kubelet[2736]: I1101 00:26:06.088274 2736 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:26:06.088700 kubelet[2736]: I1101 00:26:06.088669 2736 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:26:06.090464 kubelet[2736]: I1101 00:26:06.090433 2736 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 1 00:26:06.209493 kubelet[2736]: I1101 00:26:06.209434 2736 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:26:06.214026 kubelet[2736]: E1101 00:26:06.213986 2736 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:26:06.214026 kubelet[2736]: I1101 00:26:06.214024 2736 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:26:06.219255 kubelet[2736]: I1101 00:26:06.219230 2736 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:26:06.219833 kubelet[2736]: I1101 00:26:06.219791 2736 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:26:06.219997 kubelet[2736]: I1101 00:26:06.219822 2736 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:26:06.220103 kubelet[2736]: I1101 00:26:06.220007 2736 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:26:06.220103 kubelet[2736]: I1101 00:26:06.220017 2736 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:26:06.220103 kubelet[2736]: I1101 00:26:06.220074 2736 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:26:06.220259 kubelet[2736]: I1101 00:26:06.220245 2736 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:26:06.220292 kubelet[2736]: I1101 00:26:06.220264 2736 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:26:06.220292 kubelet[2736]: I1101 00:26:06.220285 2736 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:26:06.220367 kubelet[2736]: I1101 00:26:06.220296 2736 apiserver.go:42] "Waiting for node sync before watching apiserver 
pods" Nov 1 00:26:06.221967 kubelet[2736]: I1101 00:26:06.221372 2736 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:26:06.221967 kubelet[2736]: I1101 00:26:06.221845 2736 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:26:06.226129 kubelet[2736]: I1101 00:26:06.222392 2736 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:26:06.226129 kubelet[2736]: I1101 00:26:06.222425 2736 server.go:1287] "Started kubelet" Nov 1 00:26:06.226129 kubelet[2736]: I1101 00:26:06.223045 2736 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:26:06.226129 kubelet[2736]: I1101 00:26:06.223607 2736 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:26:06.226129 kubelet[2736]: I1101 00:26:06.223673 2736 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:26:06.226129 kubelet[2736]: I1101 00:26:06.225052 2736 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:26:06.226129 kubelet[2736]: E1101 00:26:06.225520 2736 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:26:06.228422 kubelet[2736]: I1101 00:26:06.226505 2736 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:26:06.228422 kubelet[2736]: I1101 00:26:06.226730 2736 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:26:06.228752 kubelet[2736]: I1101 00:26:06.228728 2736 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:26:06.230442 kubelet[2736]: I1101 00:26:06.228824 2736 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:26:06.230442 kubelet[2736]: I1101 00:26:06.228992 2736 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:26:06.235900 kubelet[2736]: I1101 00:26:06.235859 2736 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:26:06.236062 kubelet[2736]: I1101 00:26:06.235949 2736 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:26:06.237783 kubelet[2736]: I1101 00:26:06.237759 2736 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:26:06.245589 kubelet[2736]: I1101 00:26:06.245538 2736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:26:06.247532 kubelet[2736]: I1101 00:26:06.247511 2736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:26:06.247573 kubelet[2736]: I1101 00:26:06.247546 2736 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:26:06.247573 kubelet[2736]: I1101 00:26:06.247568 2736 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:26:06.247624 kubelet[2736]: I1101 00:26:06.247575 2736 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:26:06.247646 kubelet[2736]: E1101 00:26:06.247624 2736 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:26:06.283898 kubelet[2736]: I1101 00:26:06.283858 2736 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:26:06.283898 kubelet[2736]: I1101 00:26:06.283878 2736 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:26:06.283898 kubelet[2736]: I1101 00:26:06.283899 2736 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:26:06.284103 kubelet[2736]: I1101 00:26:06.284067 2736 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:26:06.284103 kubelet[2736]: I1101 00:26:06.284078 2736 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:26:06.284103 kubelet[2736]: I1101 00:26:06.284096 2736 policy_none.go:49] "None policy: Start" Nov 1 00:26:06.284103 kubelet[2736]: I1101 00:26:06.284105 2736 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:26:06.284281 kubelet[2736]: I1101 00:26:06.284116 2736 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:26:06.284941 kubelet[2736]: I1101 00:26:06.284457 2736 state_mem.go:75] "Updated machine memory state" Nov 1 00:26:06.287457 kubelet[2736]: I1101 00:26:06.286122 2736 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:26:06.287457 kubelet[2736]: I1101 00:26:06.286432 2736 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:26:06.287457 kubelet[2736]: I1101 00:26:06.286446 2736 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:26:06.287457 kubelet[2736]: I1101 00:26:06.286639 2736 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:26:06.287457 kubelet[2736]: E1101 00:26:06.287375 2736 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:26:06.349154 kubelet[2736]: I1101 00:26:06.349075 2736 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:06.349154 kubelet[2736]: I1101 00:26:06.349075 2736 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:06.349367 kubelet[2736]: I1101 00:26:06.349293 2736 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:06.397999 kubelet[2736]: I1101 00:26:06.397944 2736 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:26:06.411359 kubelet[2736]: I1101 00:26:06.411271 2736 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:26:06.411546 kubelet[2736]: I1101 00:26:06.411386 2736 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:26:06.530401 kubelet[2736]: I1101 00:26:06.530290 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f527c6a4d4ee203eb24b825ac9683032-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f527c6a4d4ee203eb24b825ac9683032\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:06.530401 kubelet[2736]: I1101 00:26:06.530380 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:06.530401 kubelet[2736]: I1101 00:26:06.530408 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:06.531058 kubelet[2736]: I1101 00:26:06.530435 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:06.531058 kubelet[2736]: I1101 00:26:06.530457 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:06.531058 kubelet[2736]: I1101 00:26:06.530482 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f527c6a4d4ee203eb24b825ac9683032-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f527c6a4d4ee203eb24b825ac9683032\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:06.531058 kubelet[2736]: I1101 00:26:06.530525 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/f527c6a4d4ee203eb24b825ac9683032-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f527c6a4d4ee203eb24b825ac9683032\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:26:06.531058 kubelet[2736]: I1101 00:26:06.530593 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:06.531262 kubelet[2736]: I1101 00:26:06.530662 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:26:06.710513 kubelet[2736]: E1101 00:26:06.710324 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:06.710513 kubelet[2736]: E1101 00:26:06.710432 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:06.710513 kubelet[2736]: E1101 00:26:06.710463 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:07.220999 kubelet[2736]: I1101 00:26:07.220919 2736 apiserver.go:52] "Watching apiserver" Nov 1 00:26:07.229599 kubelet[2736]: I1101 00:26:07.229563 2736 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:26:07.260392 kubelet[2736]: I1101 00:26:07.259278 2736 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:07.260392 kubelet[2736]: E1101 00:26:07.259572 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:07.260392 kubelet[2736]: E1101 00:26:07.259635 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:07.478295 kubelet[2736]: E1101 00:26:07.477961 2736 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:26:07.478833 kubelet[2736]: E1101 00:26:07.478801 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:07.980192 kubelet[2736]: I1101 00:26:07.980097 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.980075749 podStartE2EDuration="1.980075749s" podCreationTimestamp="2025-11-01 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:26:07.979994487 +0000 UTC m=+1.938103185" 
watchObservedRunningTime="2025-11-01 00:26:07.980075749 +0000 UTC m=+1.938184447" Nov 1 00:26:08.039707 kubelet[2736]: I1101 00:26:08.039606 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.039585058 podStartE2EDuration="2.039585058s" podCreationTimestamp="2025-11-01 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:26:08.03926164 +0000 UTC m=+1.997370348" watchObservedRunningTime="2025-11-01 00:26:08.039585058 +0000 UTC m=+1.997693766" Nov 1 00:26:08.133930 kubelet[2736]: I1101 00:26:08.133831 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.1336933670000002 podStartE2EDuration="2.133693367s" podCreationTimestamp="2025-11-01 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:26:08.1019146 +0000 UTC m=+2.060023298" watchObservedRunningTime="2025-11-01 00:26:08.133693367 +0000 UTC m=+2.091802065" Nov 1 00:26:08.260476 kubelet[2736]: E1101 00:26:08.260305 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:08.260476 kubelet[2736]: E1101 00:26:08.260401 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:11.007717 kubelet[2736]: E1101 00:26:11.007659 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:11.667258 kubelet[2736]: I1101 00:26:11.667213 2736 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:26:11.667724 containerd[1600]: time="2025-11-01T00:26:11.667666834Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 00:26:11.668162 kubelet[2736]: I1101 00:26:11.667952 2736 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:26:12.263295 kubelet[2736]: I1101 00:26:12.263066 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b32e9fe6-c2fa-491b-a704-ce3f6867aa22-xtables-lock\") pod \"kube-proxy-5r8k8\" (UID: \"b32e9fe6-c2fa-491b-a704-ce3f6867aa22\") " pod="kube-system/kube-proxy-5r8k8" Nov 1 00:26:12.263295 kubelet[2736]: I1101 00:26:12.263136 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b32e9fe6-c2fa-491b-a704-ce3f6867aa22-kube-proxy\") pod \"kube-proxy-5r8k8\" (UID: \"b32e9fe6-c2fa-491b-a704-ce3f6867aa22\") " pod="kube-system/kube-proxy-5r8k8" Nov 1 00:26:12.263295 kubelet[2736]: I1101 00:26:12.263192 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b32e9fe6-c2fa-491b-a704-ce3f6867aa22-lib-modules\") pod \"kube-proxy-5r8k8\" (UID: \"b32e9fe6-c2fa-491b-a704-ce3f6867aa22\") " pod="kube-system/kube-proxy-5r8k8" Nov 1 00:26:12.263295 kubelet[2736]: I1101 00:26:12.263215 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp9wh\" (UniqueName: \"kubernetes.io/projected/b32e9fe6-c2fa-491b-a704-ce3f6867aa22-kube-api-access-dp9wh\") pod \"kube-proxy-5r8k8\" (UID: \"b32e9fe6-c2fa-491b-a704-ce3f6867aa22\") " pod="kube-system/kube-proxy-5r8k8" Nov 1 00:26:12.475704 kubelet[2736]: E1101 00:26:12.475649 2736 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 1 00:26:12.475704 kubelet[2736]: E1101 00:26:12.475713 2736 projected.go:194] Error preparing data for projected volume kube-api-access-dp9wh for pod kube-system/kube-proxy-5r8k8: configmap "kube-root-ca.crt" not found Nov 1 00:26:12.476052 kubelet[2736]: E1101 00:26:12.475788 2736 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b32e9fe6-c2fa-491b-a704-ce3f6867aa22-kube-api-access-dp9wh podName:b32e9fe6-c2fa-491b-a704-ce3f6867aa22 nodeName:}" failed. No retries permitted until 2025-11-01 00:26:12.975767195 +0000 UTC m=+6.933875893 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dp9wh" (UniqueName: "kubernetes.io/projected/b32e9fe6-c2fa-491b-a704-ce3f6867aa22-kube-api-access-dp9wh") pod "kube-proxy-5r8k8" (UID: "b32e9fe6-c2fa-491b-a704-ce3f6867aa22") : configmap "kube-root-ca.crt" not found Nov 1 00:26:12.529685 kubelet[2736]: E1101 00:26:12.529526 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:12.665134 kubelet[2736]: I1101 00:26:12.665059 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dabb8f7a-e39e-4ad8-ae74-f4004708f016-var-lib-calico\") pod \"tigera-operator-7dcd859c48-m9mgt\" (UID: \"dabb8f7a-e39e-4ad8-ae74-f4004708f016\") " pod="tigera-operator/tigera-operator-7dcd859c48-m9mgt" Nov 1 00:26:12.665134 kubelet[2736]: I1101 00:26:12.665120 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkrnk\" (UniqueName: \"kubernetes.io/projected/dabb8f7a-e39e-4ad8-ae74-f4004708f016-kube-api-access-rkrnk\") pod \"tigera-operator-7dcd859c48-m9mgt\" (UID: \"dabb8f7a-e39e-4ad8-ae74-f4004708f016\") " pod="tigera-operator/tigera-operator-7dcd859c48-m9mgt" Nov 1 00:26:12.890087 containerd[1600]: time="2025-11-01T00:26:12.890008579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m9mgt,Uid:dabb8f7a-e39e-4ad8-ae74-f4004708f016,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:26:12.952192 containerd[1600]: time="2025-11-01T00:26:12.952028927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:12.952192 containerd[1600]: time="2025-11-01T00:26:12.952146998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:12.952192 containerd[1600]: time="2025-11-01T00:26:12.952165713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:12.952433 containerd[1600]: time="2025-11-01T00:26:12.952308532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:13.014896 containerd[1600]: time="2025-11-01T00:26:13.014846660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-m9mgt,Uid:dabb8f7a-e39e-4ad8-ae74-f4004708f016,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"38e0912f786c5584be08acac83457361f1fc00c36a61ebb5e604f9a03172cd40\"" Nov 1 00:26:13.016954 containerd[1600]: time="2025-11-01T00:26:13.016902819Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:26:13.152022 kubelet[2736]: E1101 00:26:13.151855 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:13.152606 containerd[1600]: time="2025-11-01T00:26:13.152559464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5r8k8,Uid:b32e9fe6-c2fa-491b-a704-ce3f6867aa22,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:13.177594 containerd[1600]: time="2025-11-01T00:26:13.176516822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:13.177594 containerd[1600]: time="2025-11-01T00:26:13.177559054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:13.177594 containerd[1600]: time="2025-11-01T00:26:13.177575815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:13.177838 containerd[1600]: time="2025-11-01T00:26:13.177677886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:13.228608 containerd[1600]: time="2025-11-01T00:26:13.228559667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5r8k8,Uid:b32e9fe6-c2fa-491b-a704-ce3f6867aa22,Namespace:kube-system,Attempt:0,} returns sandbox id \"4016f48222d390a759bf0b3f402f497cbbe00c6d39fe732cd686e8586b6ae7b6\"" Nov 1 00:26:13.229294 kubelet[2736]: E1101 00:26:13.229264 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:13.232104 containerd[1600]: time="2025-11-01T00:26:13.231513165Z" level=info msg="CreateContainer within sandbox \"4016f48222d390a759bf0b3f402f497cbbe00c6d39fe732cd686e8586b6ae7b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:26:13.248696 containerd[1600]: time="2025-11-01T00:26:13.248655380Z" level=info msg="CreateContainer within sandbox \"4016f48222d390a759bf0b3f402f497cbbe00c6d39fe732cd686e8586b6ae7b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6889660fe3804d24bff8bf9a1083e7899ed215d304a010015874b6d9818e0cd5\"" Nov 1 00:26:13.250558 containerd[1600]: time="2025-11-01T00:26:13.249221515Z" level=info msg="StartContainer for \"6889660fe3804d24bff8bf9a1083e7899ed215d304a010015874b6d9818e0cd5\"" Nov 1 00:26:13.272017 kubelet[2736]: E1101 00:26:13.271904 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:13.329113 containerd[1600]: time="2025-11-01T00:26:13.329008054Z" level=info msg="StartContainer for \"6889660fe3804d24bff8bf9a1083e7899ed215d304a010015874b6d9818e0cd5\" returns successfully" Nov 1 00:26:14.007032 kubelet[2736]: E1101 00:26:14.006990 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:14.274193 kubelet[2736]: E1101 00:26:14.273643 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:14.274193 kubelet[2736]: E1101 00:26:14.273702 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:14.274193 kubelet[2736]: E1101 00:26:14.273832 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:15.275145 kubelet[2736]: E1101 00:26:15.275091 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:15.275711 kubelet[2736]: E1101 00:26:15.275464 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:17.336586 kubelet[2736]: I1101 00:26:17.336472 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5r8k8" podStartSLOduration=5.336445143 podStartE2EDuration="5.336445143s" podCreationTimestamp="2025-11-01 00:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:26:17.335544819 +0000 UTC m=+11.293653518" watchObservedRunningTime="2025-11-01 00:26:17.336445143 +0000 UTC m=+11.294553861" Nov 1 00:26:19.355956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2892349778.mount: Deactivated successfully. Nov 1 00:26:20.651955 containerd[1600]: time="2025-11-01T00:26:20.651883361Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:20.652841 containerd[1600]: time="2025-11-01T00:26:20.652803000Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:26:20.654205 containerd[1600]: time="2025-11-01T00:26:20.654147598Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:20.656445 containerd[1600]: time="2025-11-01T00:26:20.656414661Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:20.659349 containerd[1600]: time="2025-11-01T00:26:20.657671954Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 7.640722228s" Nov 1 00:26:20.659349 containerd[1600]: time="2025-11-01T00:26:20.657708182Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:26:20.661263 containerd[1600]: time="2025-11-01T00:26:20.661230004Z" level=info msg="CreateContainer within sandbox \"38e0912f786c5584be08acac83457361f1fc00c36a61ebb5e604f9a03172cd40\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:26:20.682769 containerd[1600]: time="2025-11-01T00:26:20.682708025Z" level=info msg="CreateContainer within sandbox \"38e0912f786c5584be08acac83457361f1fc00c36a61ebb5e604f9a03172cd40\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ebbccf96e736df3213e72d763b1ff96272b499baab32b0b717e242f5f755de07\"" Nov 1 00:26:20.683308 containerd[1600]: time="2025-11-01T00:26:20.683279208Z" level=info msg="StartContainer for \"ebbccf96e736df3213e72d763b1ff96272b499baab32b0b717e242f5f755de07\"" Nov 1 00:26:20.740183 containerd[1600]: time="2025-11-01T00:26:20.740045454Z" level=info msg="StartContainer for \"ebbccf96e736df3213e72d763b1ff96272b499baab32b0b717e242f5f755de07\" 
returns successfully" Nov 1 00:26:21.012878 kubelet[2736]: E1101 00:26:21.012711 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:21.297885 kubelet[2736]: I1101 00:26:21.297604 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-m9mgt" podStartSLOduration=1.6540990359999999 podStartE2EDuration="9.297581644s" podCreationTimestamp="2025-11-01 00:26:12 +0000 UTC" firstStartedPulling="2025-11-01 00:26:13.016498648 +0000 UTC m=+6.974607346" lastFinishedPulling="2025-11-01 00:26:20.659981256 +0000 UTC m=+14.618089954" observedRunningTime="2025-11-01 00:26:21.297038673 +0000 UTC m=+15.255147381" watchObservedRunningTime="2025-11-01 00:26:21.297581644 +0000 UTC m=+15.255690362" Nov 1 00:26:26.131576 sudo[1808]: pam_unix(sudo:session): session closed for user root Nov 1 00:26:26.135266 sshd[1801]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:26.139609 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:45614.service: Deactivated successfully. Nov 1 00:26:26.145712 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:26:26.146782 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:26:26.148215 systemd-logind[1574]: Removed session 9. Nov 1 00:26:30.777716 kubelet[2736]: I1101 00:26:30.777459 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/836332b9-627b-4dee-aaa5-aa99a4d8e142-typha-certs\") pod \"calico-typha-65b9f84544-bpd74\" (UID: \"836332b9-627b-4dee-aaa5-aa99a4d8e142\") " pod="calico-system/calico-typha-65b9f84544-bpd74" Nov 1 00:26:30.777716 kubelet[2736]: I1101 00:26:30.777527 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/836332b9-627b-4dee-aaa5-aa99a4d8e142-tigera-ca-bundle\") pod \"calico-typha-65b9f84544-bpd74\" (UID: \"836332b9-627b-4dee-aaa5-aa99a4d8e142\") " pod="calico-system/calico-typha-65b9f84544-bpd74" Nov 1 00:26:30.777716 kubelet[2736]: I1101 00:26:30.777547 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp9bc\" (UniqueName: \"kubernetes.io/projected/836332b9-627b-4dee-aaa5-aa99a4d8e142-kube-api-access-lp9bc\") pod \"calico-typha-65b9f84544-bpd74\" (UID: \"836332b9-627b-4dee-aaa5-aa99a4d8e142\") " pod="calico-system/calico-typha-65b9f84544-bpd74" Nov 1 00:26:30.878883 kubelet[2736]: I1101 00:26:30.878804 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/168aa87c-0abc-4357-a215-e41bcb47e64b-flexvol-driver-host\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.878883 kubelet[2736]: I1101 00:26:30.878863 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/168aa87c-0abc-4357-a215-e41bcb47e64b-node-certs\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.878883 kubelet[2736]: I1101 00:26:30.878881 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/168aa87c-0abc-4357-a215-e41bcb47e64b-var-lib-calico\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.878883 kubelet[2736]: I1101 00:26:30.878896 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6ktx\" (UniqueName: \"kubernetes.io/projected/168aa87c-0abc-4357-a215-e41bcb47e64b-kube-api-access-k6ktx\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.878883 kubelet[2736]: I1101 00:26:30.878914 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/168aa87c-0abc-4357-a215-e41bcb47e64b-var-run-calico\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.879216 kubelet[2736]: I1101 00:26:30.878975 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/168aa87c-0abc-4357-a215-e41bcb47e64b-tigera-ca-bundle\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.879216 kubelet[2736]: I1101 00:26:30.878992 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/168aa87c-0abc-4357-a215-e41bcb47e64b-cni-bin-dir\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.879216 kubelet[2736]: I1101 00:26:30.879006 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/168aa87c-0abc-4357-a215-e41bcb47e64b-lib-modules\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.879216 kubelet[2736]: I1101 00:26:30.879029 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/168aa87c-0abc-4357-a215-e41bcb47e64b-cni-log-dir\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.879216 kubelet[2736]: I1101 00:26:30.879046 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/168aa87c-0abc-4357-a215-e41bcb47e64b-xtables-lock\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.879390 kubelet[2736]: I1101 00:26:30.879061 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/168aa87c-0abc-4357-a215-e41bcb47e64b-cni-net-dir\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.879390 kubelet[2736]: I1101 00:26:30.879077 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/168aa87c-0abc-4357-a215-e41bcb47e64b-policysync\") pod \"calico-node-q9cxd\" (UID: \"168aa87c-0abc-4357-a215-e41bcb47e64b\") " pod="calico-system/calico-node-q9cxd" Nov 1 00:26:30.894792 kubelet[2736]: E1101 00:26:30.894734 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:30.895528 containerd[1600]: time="2025-11-01T00:26:30.895484620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65b9f84544-bpd74,Uid:836332b9-627b-4dee-aaa5-aa99a4d8e142,Namespace:calico-system,Attempt:0,}" Nov 1 00:26:30.948865 containerd[1600]: time="2025-11-01T00:26:30.948394458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:30.948865 containerd[1600]: time="2025-11-01T00:26:30.948509744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:30.948865 containerd[1600]: time="2025-11-01T00:26:30.948532798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:30.948865 containerd[1600]: time="2025-11-01T00:26:30.948730359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:30.950356 kubelet[2736]: E1101 00:26:30.950260 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:26:30.980198 kubelet[2736]: I1101 00:26:30.980145 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f97e1baa-80d7-4279-b761-fdf55a406885-varrun\") pod \"csi-node-driver-lhvvn\" (UID: \"f97e1baa-80d7-4279-b761-fdf55a406885\") " pod="calico-system/csi-node-driver-lhvvn" Nov 1 00:26:30.980198 kubelet[2736]: I1101 00:26:30.980197 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56mv\" (UniqueName: \"kubernetes.io/projected/f97e1baa-80d7-4279-b761-fdf55a406885-kube-api-access-b56mv\") pod \"csi-node-driver-lhvvn\" (UID: \"f97e1baa-80d7-4279-b761-fdf55a406885\") " pod="calico-system/csi-node-driver-lhvvn" Nov 1 00:26:30.980474 kubelet[2736]: I1101 00:26:30.980242 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f97e1baa-80d7-4279-b761-fdf55a406885-registration-dir\") pod \"csi-node-driver-lhvvn\" (UID: \"f97e1baa-80d7-4279-b761-fdf55a406885\") " pod="calico-system/csi-node-driver-lhvvn" Nov 1 00:26:30.980474 kubelet[2736]: I1101 00:26:30.980288 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f97e1baa-80d7-4279-b761-fdf55a406885-kubelet-dir\") pod \"csi-node-driver-lhvvn\" (UID: \"f97e1baa-80d7-4279-b761-fdf55a406885\") " pod="calico-system/csi-node-driver-lhvvn" Nov 1 00:26:30.980474 kubelet[2736]: I1101 00:26:30.980302 2736 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f97e1baa-80d7-4279-b761-fdf55a406885-socket-dir\") pod \"csi-node-driver-lhvvn\" (UID: \"f97e1baa-80d7-4279-b761-fdf55a406885\") " pod="calico-system/csi-node-driver-lhvvn" Nov 1 00:26:30.984718 kubelet[2736]: E1101 00:26:30.984670 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:30.984875 kubelet[2736]: W1101 00:26:30.984840 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:30.985040 kubelet[2736]: E1101 00:26:30.985006 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:30.985421 kubelet[2736]: E1101 00:26:30.985405 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:30.985486 kubelet[2736]: W1101 00:26:30.985473 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:30.985555 kubelet[2736]: E1101 00:26:30.985544 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:30.986062 kubelet[2736]: E1101 00:26:30.985924 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:30.986062 kubelet[2736]: W1101 00:26:30.986041 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:30.986241 kubelet[2736]: E1101 00:26:30.986069 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:30.988172 kubelet[2736]: E1101 00:26:30.988153 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:30.988172 kubelet[2736]: W1101 00:26:30.988169 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:30.988172 kubelet[2736]: E1101 00:26:30.988181 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:30.991557 kubelet[2736]: E1101 00:26:30.991532 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:30.991557 kubelet[2736]: W1101 00:26:30.991552 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:30.991654 kubelet[2736]: E1101 00:26:30.991573 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.020239 containerd[1600]: time="2025-11-01T00:26:31.020191349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65b9f84544-bpd74,Uid:836332b9-627b-4dee-aaa5-aa99a4d8e142,Namespace:calico-system,Attempt:0,} returns sandbox id \"3fb50093cafe4a0ed19fce23bb5668db44acbb3b182f1a754d6ceb568ef3c6a8\"" Nov 1 00:26:31.026830 kubelet[2736]: E1101 00:26:31.026773 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:31.030094 containerd[1600]: time="2025-11-01T00:26:31.029753614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:26:31.068878 kubelet[2736]: E1101 00:26:31.068833 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:31.069515 containerd[1600]: time="2025-11-01T00:26:31.069445815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q9cxd,Uid:168aa87c-0abc-4357-a215-e41bcb47e64b,Namespace:calico-system,Attempt:0,}" Nov 1 00:26:31.081572 kubelet[2736]: E1101 00:26:31.081524 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.081572 kubelet[2736]: W1101 00:26:31.081545 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.081572 kubelet[2736]: E1101 00:26:31.081566 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.081796 kubelet[2736]: E1101 00:26:31.081782 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.081796 kubelet[2736]: W1101 00:26:31.081794 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.081848 kubelet[2736]: E1101 00:26:31.081808 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:31.082072 kubelet[2736]: E1101 00:26:31.082046 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.082072 kubelet[2736]: W1101 00:26:31.082065 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.082134 kubelet[2736]: E1101 00:26:31.082088 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.082439 kubelet[2736]: E1101 00:26:31.082416 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.082469 kubelet[2736]: W1101 00:26:31.082439 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.082491 kubelet[2736]: E1101 00:26:31.082478 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.082775 kubelet[2736]: E1101 00:26:31.082761 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.082775 kubelet[2736]: W1101 00:26:31.082774 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.082824 kubelet[2736]: E1101 00:26:31.082791 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.083112 kubelet[2736]: E1101 00:26:31.083092 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.083112 kubelet[2736]: W1101 00:26:31.083107 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.083174 kubelet[2736]: E1101 00:26:31.083124 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.083429 kubelet[2736]: E1101 00:26:31.083410 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.083429 kubelet[2736]: W1101 00:26:31.083428 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.083493 kubelet[2736]: E1101 00:26:31.083449 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:31.083767 kubelet[2736]: E1101 00:26:31.083738 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.083767 kubelet[2736]: W1101 00:26:31.083760 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.083838 kubelet[2736]: E1101 00:26:31.083823 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.084051 kubelet[2736]: E1101 00:26:31.084033 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.084051 kubelet[2736]: W1101 00:26:31.084048 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.084106 kubelet[2736]: E1101 00:26:31.084081 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.084297 kubelet[2736]: E1101 00:26:31.084280 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.084297 kubelet[2736]: W1101 00:26:31.084294 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.084371 kubelet[2736]: E1101 00:26:31.084324 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.084534 kubelet[2736]: E1101 00:26:31.084516 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.084534 kubelet[2736]: W1101 00:26:31.084531 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.084587 kubelet[2736]: E1101 00:26:31.084563 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.084762 kubelet[2736]: E1101 00:26:31.084746 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.084762 kubelet[2736]: W1101 00:26:31.084758 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.084813 kubelet[2736]: E1101 00:26:31.084772 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:31.085018 kubelet[2736]: E1101 00:26:31.084998 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.085018 kubelet[2736]: W1101 00:26:31.085011 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.085128 kubelet[2736]: E1101 00:26:31.085026 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.085285 kubelet[2736]: E1101 00:26:31.085269 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.085285 kubelet[2736]: W1101 00:26:31.085283 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.085355 kubelet[2736]: E1101 00:26:31.085301 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.085562 kubelet[2736]: E1101 00:26:31.085537 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.085562 kubelet[2736]: W1101 00:26:31.085547 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.085562 kubelet[2736]: E1101 00:26:31.085563 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.085915 kubelet[2736]: E1101 00:26:31.085890 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.085915 kubelet[2736]: W1101 00:26:31.085903 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.086044 kubelet[2736]: E1101 00:26:31.086016 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.086210 kubelet[2736]: E1101 00:26:31.086194 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.086210 kubelet[2736]: W1101 00:26:31.086206 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.086365 kubelet[2736]: E1101 00:26:31.086307 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:31.086466 kubelet[2736]: E1101 00:26:31.086449 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.086466 kubelet[2736]: W1101 00:26:31.086463 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.086529 kubelet[2736]: E1101 00:26:31.086493 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.086846 kubelet[2736]: E1101 00:26:31.086710 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.086846 kubelet[2736]: W1101 00:26:31.086726 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.086846 kubelet[2736]: E1101 00:26:31.086792 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.087060 kubelet[2736]: E1101 00:26:31.087038 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.087060 kubelet[2736]: W1101 00:26:31.087053 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.087141 kubelet[2736]: E1101 00:26:31.087069 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.087274 kubelet[2736]: E1101 00:26:31.087254 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.087274 kubelet[2736]: W1101 00:26:31.087268 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.087372 kubelet[2736]: E1101 00:26:31.087286 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.087622 kubelet[2736]: E1101 00:26:31.087601 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.087622 kubelet[2736]: W1101 00:26:31.087616 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.087705 kubelet[2736]: E1101 00:26:31.087631 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:31.087884 kubelet[2736]: E1101 00:26:31.087868 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.087884 kubelet[2736]: W1101 00:26:31.087880 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.087938 kubelet[2736]: E1101 00:26:31.087894 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.088083 kubelet[2736]: E1101 00:26:31.088068 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.088083 kubelet[2736]: W1101 00:26:31.088079 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.088131 kubelet[2736]: E1101 00:26:31.088088 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.088383 kubelet[2736]: E1101 00:26:31.088364 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.088383 kubelet[2736]: W1101 00:26:31.088379 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.088445 kubelet[2736]: E1101 00:26:31.088394 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.099594 kubelet[2736]: E1101 00:26:31.099516 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:31.099594 kubelet[2736]: W1101 00:26:31.099538 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:31.099594 kubelet[2736]: E1101 00:26:31.099557 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:31.102238 containerd[1600]: time="2025-11-01T00:26:31.102123697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:31.102238 containerd[1600]: time="2025-11-01T00:26:31.102193949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:31.102238 containerd[1600]: time="2025-11-01T00:26:31.102207174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:31.102477 containerd[1600]: time="2025-11-01T00:26:31.102312122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:31.142622 containerd[1600]: time="2025-11-01T00:26:31.142467222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q9cxd,Uid:168aa87c-0abc-4357-a215-e41bcb47e64b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c78ca9f0b522f786740e97db61fa42ba3357c7931e54a9eba97ed6128af5e6a5\"" Nov 1 00:26:31.144950 kubelet[2736]: E1101 00:26:31.144447 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:32.248648 kubelet[2736]: E1101 00:26:32.248583 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:26:33.863569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3002700841.mount: Deactivated successfully. Nov 1 00:26:34.248959 kubelet[2736]: E1101 00:26:34.248784 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:26:34.292997 containerd[1600]: time="2025-11-01T00:26:34.292933793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:34.293738 containerd[1600]: time="2025-11-01T00:26:34.293672851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:26:34.295026 containerd[1600]: time="2025-11-01T00:26:34.294985617Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:34.297154 containerd[1600]: time="2025-11-01T00:26:34.297125366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:34.297839 containerd[1600]: time="2025-11-01T00:26:34.297808249Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.268007226s" Nov 1 00:26:34.297919 containerd[1600]: time="2025-11-01T00:26:34.297843004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:26:34.301587 containerd[1600]: time="2025-11-01T00:26:34.301546482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:26:34.312494 containerd[1600]: time="2025-11-01T00:26:34.312308969Z" level=info msg="CreateContainer within sandbox \"3fb50093cafe4a0ed19fce23bb5668db44acbb3b182f1a754d6ceb568ef3c6a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" 
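The repeated "Error syncing pod ... cni plugin not initialized" events for csi-node-driver-lhvvn are the flip side of the earlier containerd message that no CNI config template is specified: the runtime keeps reporting NetworkReady=false until a network configuration shows up in its CNI conf directory, which the calico-node pod writes once it is running. A small readiness sketch, assuming containerd's default conf directory of /etc/cni/net.d (the actual path is configurable, so treat it as an assumption):

```python
# Sketch: what "cni plugin not initialized" usually means on a node like this.
# The CNI conf directory (assumed /etc/cni/net.d) has no network config yet,
# so the runtime reports NetworkReady=false until calico-node drops one in.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")

def cni_config_present(conf_dir=CNI_CONF_DIR):
    """True once at least one .conf/.conflist/.json network config exists."""
    if not conf_dir.is_dir():
        return False
    return any(p.suffix in (".conf", ".conflist", ".json") for p in conf_dir.iterdir())

print("CNI config present:", cni_config_present())
```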
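The bursts of driver-call.go and plugins.go errors around this point come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ and invoking nodeagent~uds/uds with the init argument; the executable is missing on this host, so the call produces empty output and the expected JSON status cannot be parsed ("unexpected end of JSON input"). For reference, a FlexVolume driver answers init with a small JSON status object on stdout; the stub below only illustrates that handshake and is not the missing uds driver itself:

```python
#!/usr/bin/env python3
# Minimal FlexVolume driver stub, illustrating the "init" handshake the
# kubelet's driver call is failing on above (empty output -> JSON unmarshal
# error). A real driver would also implement mount/unmount; this is a sketch.
import json
import sys

def main():
    if len(sys.argv) > 1 and sys.argv[1] == "init":
        # The kubelet expects a JSON status object on stdout.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Unimplemented calls report "Not supported" per the FlexVolume convention.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```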
Nov 1 00:26:34.329605 containerd[1600]: time="2025-11-01T00:26:34.329524318Z" level=info msg="CreateContainer within sandbox \"3fb50093cafe4a0ed19fce23bb5668db44acbb3b182f1a754d6ceb568ef3c6a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a8226d20b19b1631a13982630ba23acce26319c61c5eb534ce20c5f0db6902a1\"" Nov 1 00:26:34.330112 containerd[1600]: time="2025-11-01T00:26:34.330055857Z" level=info msg="StartContainer for \"a8226d20b19b1631a13982630ba23acce26319c61c5eb534ce20c5f0db6902a1\"" Nov 1 00:26:34.417738 containerd[1600]: time="2025-11-01T00:26:34.417653338Z" level=info msg="StartContainer for \"a8226d20b19b1631a13982630ba23acce26319c61c5eb534ce20c5f0db6902a1\" returns successfully" Nov 1 00:26:35.344133 kubelet[2736]: E1101 00:26:35.344092 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:35.403862 kubelet[2736]: E1101 00:26:35.403809 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.403862 kubelet[2736]: W1101 00:26:35.403842 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.403862 kubelet[2736]: E1101 00:26:35.403870 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:35.404207 kubelet[2736]: E1101 00:26:35.404177 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.404207 kubelet[2736]: W1101 00:26:35.404196 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.404280 kubelet[2736]: E1101 00:26:35.404208 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:35.404513 kubelet[2736]: E1101 00:26:35.404492 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.404513 kubelet[2736]: W1101 00:26:35.404509 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.404574 kubelet[2736]: E1101 00:26:35.404524 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:35.404854 kubelet[2736]: E1101 00:26:35.404837 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.404854 kubelet[2736]: W1101 00:26:35.404850 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.404926 kubelet[2736]: E1101 00:26:35.404861 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:35.405208 kubelet[2736]: E1101 00:26:35.405171 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.405240 kubelet[2736]: W1101 00:26:35.405205 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.405268 kubelet[2736]: E1101 00:26:35.405246 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:35.405693 kubelet[2736]: E1101 00:26:35.405676 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.405693 kubelet[2736]: W1101 00:26:35.405690 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.405776 kubelet[2736]: E1101 00:26:35.405701 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:35.405922 kubelet[2736]: E1101 00:26:35.405905 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.405922 kubelet[2736]: W1101 00:26:35.405916 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.405985 kubelet[2736]: E1101 00:26:35.405924 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:35.406143 kubelet[2736]: E1101 00:26:35.406126 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.406143 kubelet[2736]: W1101 00:26:35.406138 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.406199 kubelet[2736]: E1101 00:26:35.406146 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:35.418104 kubelet[2736]: E1101 00:26:35.418075 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.418104 kubelet[2736]: W1101 00:26:35.418087 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.418104 kubelet[2736]: E1101 00:26:35.418103 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:35.418535 kubelet[2736]: E1101 00:26:35.418509 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.418535 kubelet[2736]: W1101 00:26:35.418527 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.418669 kubelet[2736]: E1101 00:26:35.418544 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:35.418824 kubelet[2736]: E1101 00:26:35.418801 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.418824 kubelet[2736]: W1101 00:26:35.418816 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.418932 kubelet[2736]: E1101 00:26:35.418831 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:35.419102 kubelet[2736]: E1101 00:26:35.419077 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.419102 kubelet[2736]: W1101 00:26:35.419095 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.419189 kubelet[2736]: E1101 00:26:35.419113 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:35.419390 kubelet[2736]: E1101 00:26:35.419330 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:35.419390 kubelet[2736]: W1101 00:26:35.419383 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:35.419504 kubelet[2736]: E1101 00:26:35.419395 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:35.559621 kubelet[2736]: I1101 00:26:35.559503 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65b9f84544-bpd74" podStartSLOduration=2.286186566 podStartE2EDuration="5.559474192s" podCreationTimestamp="2025-11-01 00:26:30 +0000 UTC" firstStartedPulling="2025-11-01 00:26:31.028015889 +0000 UTC m=+24.986124587" lastFinishedPulling="2025-11-01 00:26:34.301303515 +0000 UTC m=+28.259412213" observedRunningTime="2025-11-01 00:26:35.549960972 +0000 UTC m=+29.508069700" watchObservedRunningTime="2025-11-01 00:26:35.559474192 +0000 UTC m=+29.517582910" Nov 1 00:26:36.248946 kubelet[2736]: E1101 00:26:36.248852 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:26:36.341802 kubelet[2736]: I1101 00:26:36.341761 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:26:36.342150 kubelet[2736]: E1101 00:26:36.342128 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:36.415141 kubelet[2736]: E1101 00:26:36.415086 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:36.415141 kubelet[2736]: W1101 00:26:36.415126 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:36.415141 kubelet[2736]: E1101 00:26:36.415152 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:36.415758 kubelet[2736]: E1101 00:26:36.415410 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:36.415758 kubelet[2736]: W1101 00:26:36.415422 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:36.415758 kubelet[2736]: E1101 00:26:36.415433 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:36.415758 kubelet[2736]: E1101 00:26:36.415679 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:36.415758 kubelet[2736]: W1101 00:26:36.415688 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:36.415758 kubelet[2736]: E1101 00:26:36.415705 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:36.434579 kubelet[2736]: E1101 00:26:36.434487 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:36.434579 kubelet[2736]: W1101 00:26:36.434498 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:36.434579 kubelet[2736]: E1101 00:26:36.434520 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:36.434878 kubelet[2736]: E1101 00:26:36.434860 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:36.434878 kubelet[2736]: W1101 00:26:36.434872 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:36.435017 kubelet[2736]: E1101 00:26:36.434990 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:36.435059 kubelet[2736]: E1101 00:26:36.435053 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:36.435090 kubelet[2736]: W1101 00:26:36.435061 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:36.435090 kubelet[2736]: E1101 00:26:36.435070 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:36.435486 kubelet[2736]: E1101 00:26:36.435468 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:36.435698 kubelet[2736]: W1101 00:26:36.435586 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:36.435698 kubelet[2736]: E1101 00:26:36.435613 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:26:36.436053 kubelet[2736]: E1101 00:26:36.435991 2736 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:26:36.436053 kubelet[2736]: W1101 00:26:36.436005 2736 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:26:36.436053 kubelet[2736]: E1101 00:26:36.436017 2736 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:26:36.587725 containerd[1600]: time="2025-11-01T00:26:36.587574253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:36.590163 containerd[1600]: time="2025-11-01T00:26:36.590056876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:26:36.591453 containerd[1600]: time="2025-11-01T00:26:36.591388998Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:36.593959 containerd[1600]: time="2025-11-01T00:26:36.593909273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:36.594725 containerd[1600]: time="2025-11-01T00:26:36.594667286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.293081311s" Nov 1 00:26:36.594725 containerd[1600]: time="2025-11-01T00:26:36.594704236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:26:36.597315 containerd[1600]: time="2025-11-01T00:26:36.597275195Z" level=info msg="CreateContainer within sandbox \"c78ca9f0b522f786740e97db61fa42ba3357c7931e54a9eba97ed6128af5e6a5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:26:36.619611 containerd[1600]: time="2025-11-01T00:26:36.619566706Z" level=info msg="CreateContainer within sandbox \"c78ca9f0b522f786740e97db61fa42ba3357c7931e54a9eba97ed6128af5e6a5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7606e2813cdfe964b2cbce39014d7fc2ad87ac815d79882c31f9e9c80c4d50b6\"" Nov 1 00:26:36.620217 containerd[1600]: time="2025-11-01T00:26:36.620166002Z" level=info msg="StartContainer for \"7606e2813cdfe964b2cbce39014d7fc2ad87ac815d79882c31f9e9c80c4d50b6\"" Nov 1 00:26:36.695705 containerd[1600]: time="2025-11-01T00:26:36.695631213Z" level=info msg="StartContainer for \"7606e2813cdfe964b2cbce39014d7fc2ad87ac815d79882c31f9e9c80c4d50b6\" returns successfully" Nov 1 00:26:36.733673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7606e2813cdfe964b2cbce39014d7fc2ad87ac815d79882c31f9e9c80c4d50b6-rootfs.mount: Deactivated successfully. 
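The repeated kubelet messages above (driver-call.go / plugins.go) come from the kubelet re-probing the FlexVolume plugin directory nodeagent~uds before its driver binary exists; the flexvol-driver container pulled just above is what is expected to install it. The following is a minimal, illustrative Go sketch of that "init" handshake as the log messages imply it, not the kubelet's actual code: the kubelet execs the driver with "init" and unmarshals a JSON status object from stdout, so a missing binary yields "executable file not found in $PATH" and an empty stdout yields "unexpected end of JSON input". The struct shape and helper names are assumptions for illustration.

// flexprobe.go: sketch of the FlexVolume "init" probe implied by the log (illustrative only).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the kind of JSON a FlexVolume driver is expected to print,
// e.g. {"status":"Success","capabilities":{"attach":false}} (assumed shape).
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeDriver runs "<driver> init" and decodes its stdout, reproducing the two
// failure modes seen above.
func probeDriver(executable string) (*driverStatus, error) {
	out, err := exec.Command(executable, "init").Output()
	if err != nil {
		// Missing binary surfaces here, e.g. "executable file not found in $PATH".
		return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty output produces exactly "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output for command init: %w", err)
	}
	return &st, nil
}

func main() {
	st, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	fmt.Printf("driver status: %+v\n", st)
}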
Nov 1 00:26:37.248131 containerd[1600]: time="2025-11-01T00:26:37.246404429Z" level=info msg="shim disconnected" id=7606e2813cdfe964b2cbce39014d7fc2ad87ac815d79882c31f9e9c80c4d50b6 namespace=k8s.io Nov 1 00:26:37.248131 containerd[1600]: time="2025-11-01T00:26:37.248127927Z" level=warning msg="cleaning up after shim disconnected" id=7606e2813cdfe964b2cbce39014d7fc2ad87ac815d79882c31f9e9c80c4d50b6 namespace=k8s.io Nov 1 00:26:37.248131 containerd[1600]: time="2025-11-01T00:26:37.248140330Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:26:37.347292 kubelet[2736]: E1101 00:26:37.346872 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:37.348765 containerd[1600]: time="2025-11-01T00:26:37.348696775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:26:38.248921 kubelet[2736]: E1101 00:26:38.248826 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:26:40.348779 kubelet[2736]: E1101 00:26:40.348674 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:26:40.407240 containerd[1600]: time="2025-11-01T00:26:40.407162646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:40.408322 containerd[1600]: time="2025-11-01T00:26:40.408263243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:26:40.409519 containerd[1600]: time="2025-11-01T00:26:40.409462196Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:40.412020 containerd[1600]: time="2025-11-01T00:26:40.411946891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:40.412577 containerd[1600]: time="2025-11-01T00:26:40.412543763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.063790703s" Nov 1 00:26:40.412644 containerd[1600]: time="2025-11-01T00:26:40.412573358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:26:40.415974 containerd[1600]: time="2025-11-01T00:26:40.415947115Z" level=info msg="CreateContainer within sandbox \"c78ca9f0b522f786740e97db61fa42ba3357c7931e54a9eba97ed6128af5e6a5\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:26:40.433966 containerd[1600]: time="2025-11-01T00:26:40.433910222Z" level=info msg="CreateContainer within sandbox \"c78ca9f0b522f786740e97db61fa42ba3357c7931e54a9eba97ed6128af5e6a5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"09c3bfd75e6c8f3e7c439641bcb48f7ab3477d85e8e0a57bd070c94dbd688720\"" Nov 1 00:26:40.437653 containerd[1600]: time="2025-11-01T00:26:40.437581638Z" level=info msg="StartContainer for \"09c3bfd75e6c8f3e7c439641bcb48f7ab3477d85e8e0a57bd070c94dbd688720\"" Nov 1 00:26:40.508527 containerd[1600]: time="2025-11-01T00:26:40.508311117Z" level=info msg="StartContainer for \"09c3bfd75e6c8f3e7c439641bcb48f7ab3477d85e8e0a57bd070c94dbd688720\" returns successfully" Nov 1 00:26:40.714885 kubelet[2736]: I1101 00:26:40.714358 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:26:40.714885 kubelet[2736]: E1101 00:26:40.714831 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:41.570523 kubelet[2736]: E1101 00:26:41.570474 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:41.571772 kubelet[2736]: E1101 00:26:41.571499 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:42.214356 systemd-journald[1167]: Under memory pressure, flushing caches. Nov 1 00:26:42.212490 systemd-resolved[1478]: Under memory pressure, flushing caches. Nov 1 00:26:42.212570 systemd-resolved[1478]: Flushed all caches. Nov 1 00:26:42.240804 containerd[1600]: time="2025-11-01T00:26:42.240733528Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:26:42.249797 kubelet[2736]: E1101 00:26:42.249752 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:26:42.272305 kubelet[2736]: I1101 00:26:42.271880 2736 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:26:42.288456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09c3bfd75e6c8f3e7c439641bcb48f7ab3477d85e8e0a57bd070c94dbd688720-rootfs.mount: Deactivated successfully. 
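The "cni plugin not initialized" / "no network config found in /etc/cni/net.d" errors above, and the sandbox failures further down ("stat /var/lib/calico/nodename: no such file or directory"), both reduce to files that the install-cni and calico/node containers have not written yet. Below is a minimal Go sketch of those two readiness checks as the error strings imply them; it is not kubelet or containerd code, and the glob patterns and helper names are assumptions for illustration. The file paths are taken from the log itself.

// cnicheck.go: sketch of the two readiness conditions implied by the errors in this log (illustrative only).
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether any network config is present, mirroring the
// "no network config found in /etc/cni/net.d" condition.
func hasCNIConfig(dir string) bool {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		if matches, _ := filepath.Glob(filepath.Join(dir, pat)); len(matches) > 0 {
			return true
		}
	}
	return false
}

// hasCalicoNodename reports whether calico/node has written its nodename file,
// the missing piece behind the sandbox setup errors later in the log.
func hasCalicoNodename(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	fmt.Println("cni config present:     ", hasCNIConfig("/etc/cni/net.d"))
	fmt.Println("calico nodename present:", hasCalicoNodename("/var/lib/calico/nodename"))
}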
Nov 1 00:26:42.295032 containerd[1600]: time="2025-11-01T00:26:42.294893803Z" level=info msg="shim disconnected" id=09c3bfd75e6c8f3e7c439641bcb48f7ab3477d85e8e0a57bd070c94dbd688720 namespace=k8s.io Nov 1 00:26:42.295198 containerd[1600]: time="2025-11-01T00:26:42.295023196Z" level=warning msg="cleaning up after shim disconnected" id=09c3bfd75e6c8f3e7c439641bcb48f7ab3477d85e8e0a57bd070c94dbd688720 namespace=k8s.io Nov 1 00:26:42.295198 containerd[1600]: time="2025-11-01T00:26:42.295059534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:26:42.358248 kubelet[2736]: I1101 00:26:42.357964 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26210ce5-453a-47fe-b5c4-bb7d1e50d30b-tigera-ca-bundle\") pod \"calico-kube-controllers-77dccc7d57-zdj9g\" (UID: \"26210ce5-453a-47fe-b5c4-bb7d1e50d30b\") " pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" Nov 1 00:26:42.364866 kubelet[2736]: I1101 00:26:42.358513 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29f73971-767b-4aac-baa4-25b13c4b42ec-config-volume\") pod \"coredns-668d6bf9bc-ghftw\" (UID: \"29f73971-767b-4aac-baa4-25b13c4b42ec\") " pod="kube-system/coredns-668d6bf9bc-ghftw" Nov 1 00:26:42.364866 kubelet[2736]: I1101 00:26:42.361718 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/151d3855-7594-4722-a64f-ba8ae7061d01-calico-apiserver-certs\") pod \"calico-apiserver-599794c67d-92gsc\" (UID: \"151d3855-7594-4722-a64f-ba8ae7061d01\") " pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" Nov 1 00:26:42.364866 kubelet[2736]: I1101 00:26:42.362490 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56b8c\" (UniqueName: \"kubernetes.io/projected/e7c7365e-fed9-44a2-bb07-9942249f952b-kube-api-access-56b8c\") pod \"goldmane-666569f655-vv2f4\" (UID: \"e7c7365e-fed9-44a2-bb07-9942249f952b\") " pod="calico-system/goldmane-666569f655-vv2f4" Nov 1 00:26:42.364866 kubelet[2736]: I1101 00:26:42.362554 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drgfc\" (UniqueName: \"kubernetes.io/projected/3faeb474-0035-403d-ae2d-10880fff0c9f-kube-api-access-drgfc\") pod \"whisker-5fc5d9467-nfltp\" (UID: \"3faeb474-0035-403d-ae2d-10880fff0c9f\") " pod="calico-system/whisker-5fc5d9467-nfltp" Nov 1 00:26:42.364866 kubelet[2736]: I1101 00:26:42.362614 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxtr5\" (UniqueName: \"kubernetes.io/projected/26210ce5-453a-47fe-b5c4-bb7d1e50d30b-kube-api-access-jxtr5\") pod \"calico-kube-controllers-77dccc7d57-zdj9g\" (UID: \"26210ce5-453a-47fe-b5c4-bb7d1e50d30b\") " pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" Nov 1 00:26:42.365561 kubelet[2736]: I1101 00:26:42.362699 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3faeb474-0035-403d-ae2d-10880fff0c9f-whisker-backend-key-pair\") pod \"whisker-5fc5d9467-nfltp\" (UID: \"3faeb474-0035-403d-ae2d-10880fff0c9f\") " pod="calico-system/whisker-5fc5d9467-nfltp" Nov 1 00:26:42.365561 kubelet[2736]: I1101 
00:26:42.362759 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/18da393d-9f84-487e-a8ed-8cbdbb46de00-calico-apiserver-certs\") pod \"calico-apiserver-599794c67d-gvjds\" (UID: \"18da393d-9f84-487e-a8ed-8cbdbb46de00\") " pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" Nov 1 00:26:42.365561 kubelet[2736]: I1101 00:26:42.362806 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mp69\" (UniqueName: \"kubernetes.io/projected/151d3855-7594-4722-a64f-ba8ae7061d01-kube-api-access-7mp69\") pod \"calico-apiserver-599794c67d-92gsc\" (UID: \"151d3855-7594-4722-a64f-ba8ae7061d01\") " pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" Nov 1 00:26:42.365561 kubelet[2736]: I1101 00:26:42.362869 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3faeb474-0035-403d-ae2d-10880fff0c9f-whisker-ca-bundle\") pod \"whisker-5fc5d9467-nfltp\" (UID: \"3faeb474-0035-403d-ae2d-10880fff0c9f\") " pod="calico-system/whisker-5fc5d9467-nfltp" Nov 1 00:26:42.365561 kubelet[2736]: I1101 00:26:42.362951 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjns\" (UniqueName: \"kubernetes.io/projected/18da393d-9f84-487e-a8ed-8cbdbb46de00-kube-api-access-vgjns\") pod \"calico-apiserver-599794c67d-gvjds\" (UID: \"18da393d-9f84-487e-a8ed-8cbdbb46de00\") " pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" Nov 1 00:26:42.365929 kubelet[2736]: I1101 00:26:42.363046 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jcgq\" (UniqueName: \"kubernetes.io/projected/29f73971-767b-4aac-baa4-25b13c4b42ec-kube-api-access-7jcgq\") pod \"coredns-668d6bf9bc-ghftw\" (UID: \"29f73971-767b-4aac-baa4-25b13c4b42ec\") " pod="kube-system/coredns-668d6bf9bc-ghftw" Nov 1 00:26:42.365929 kubelet[2736]: I1101 00:26:42.363108 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7c7365e-fed9-44a2-bb07-9942249f952b-goldmane-ca-bundle\") pod \"goldmane-666569f655-vv2f4\" (UID: \"e7c7365e-fed9-44a2-bb07-9942249f952b\") " pod="calico-system/goldmane-666569f655-vv2f4" Nov 1 00:26:42.365929 kubelet[2736]: I1101 00:26:42.363172 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e7c7365e-fed9-44a2-bb07-9942249f952b-goldmane-key-pair\") pod \"goldmane-666569f655-vv2f4\" (UID: \"e7c7365e-fed9-44a2-bb07-9942249f952b\") " pod="calico-system/goldmane-666569f655-vv2f4" Nov 1 00:26:42.365929 kubelet[2736]: I1101 00:26:42.363228 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30524ccc-9256-4e38-a18e-44025e0e57e8-config-volume\") pod \"coredns-668d6bf9bc-qjdb4\" (UID: \"30524ccc-9256-4e38-a18e-44025e0e57e8\") " pod="kube-system/coredns-668d6bf9bc-qjdb4" Nov 1 00:26:42.365929 kubelet[2736]: I1101 00:26:42.363308 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28r4g\" (UniqueName: 
\"kubernetes.io/projected/30524ccc-9256-4e38-a18e-44025e0e57e8-kube-api-access-28r4g\") pod \"coredns-668d6bf9bc-qjdb4\" (UID: \"30524ccc-9256-4e38-a18e-44025e0e57e8\") " pod="kube-system/coredns-668d6bf9bc-qjdb4" Nov 1 00:26:42.371057 kubelet[2736]: I1101 00:26:42.370506 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7c7365e-fed9-44a2-bb07-9942249f952b-config\") pod \"goldmane-666569f655-vv2f4\" (UID: \"e7c7365e-fed9-44a2-bb07-9942249f952b\") " pod="calico-system/goldmane-666569f655-vv2f4" Nov 1 00:26:42.576284 kubelet[2736]: E1101 00:26:42.576138 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:42.577700 containerd[1600]: time="2025-11-01T00:26:42.577597887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:26:42.643957 kubelet[2736]: E1101 00:26:42.643915 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:42.644714 containerd[1600]: time="2025-11-01T00:26:42.644651958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qjdb4,Uid:30524ccc-9256-4e38-a18e-44025e0e57e8,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:42.652252 containerd[1600]: time="2025-11-01T00:26:42.652211435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vv2f4,Uid:e7c7365e-fed9-44a2-bb07-9942249f952b,Namespace:calico-system,Attempt:0,}" Nov 1 00:26:42.672129 kubelet[2736]: E1101 00:26:42.671773 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:42.673258 containerd[1600]: time="2025-11-01T00:26:42.673218878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ghftw,Uid:29f73971-767b-4aac-baa4-25b13c4b42ec,Namespace:kube-system,Attempt:0,}" Nov 1 00:26:42.682130 containerd[1600]: time="2025-11-01T00:26:42.681905252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599794c67d-gvjds,Uid:18da393d-9f84-487e-a8ed-8cbdbb46de00,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:26:42.682982 containerd[1600]: time="2025-11-01T00:26:42.682926710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fc5d9467-nfltp,Uid:3faeb474-0035-403d-ae2d-10880fff0c9f,Namespace:calico-system,Attempt:0,}" Nov 1 00:26:42.686053 containerd[1600]: time="2025-11-01T00:26:42.685999681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599794c67d-92gsc,Uid:151d3855-7594-4722-a64f-ba8ae7061d01,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:26:42.686354 containerd[1600]: time="2025-11-01T00:26:42.686297771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77dccc7d57-zdj9g,Uid:26210ce5-453a-47fe-b5c4-bb7d1e50d30b,Namespace:calico-system,Attempt:0,}" Nov 1 00:26:42.907714 containerd[1600]: time="2025-11-01T00:26:42.906993620Z" level=error msg="Failed to destroy network for sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 00:26:42.919574 containerd[1600]: time="2025-11-01T00:26:42.919511690Z" level=error msg="Failed to destroy network for sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.926619 containerd[1600]: time="2025-11-01T00:26:42.926068032Z" level=error msg="encountered an error cleaning up failed sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.926619 containerd[1600]: time="2025-11-01T00:26:42.926540330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vv2f4,Uid:e7c7365e-fed9-44a2-bb07-9942249f952b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.931540 containerd[1600]: time="2025-11-01T00:26:42.931367385Z" level=error msg="encountered an error cleaning up failed sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.931540 containerd[1600]: time="2025-11-01T00:26:42.931459748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qjdb4,Uid:30524ccc-9256-4e38-a18e-44025e0e57e8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.956708 containerd[1600]: time="2025-11-01T00:26:42.956598370Z" level=error msg="Failed to destroy network for sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.961670 containerd[1600]: time="2025-11-01T00:26:42.961229147Z" level=error msg="encountered an error cleaning up failed sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.961670 containerd[1600]: time="2025-11-01T00:26:42.961605734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599794c67d-gvjds,Uid:18da393d-9f84-487e-a8ed-8cbdbb46de00,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.976127 containerd[1600]: time="2025-11-01T00:26:42.975595978Z" level=error msg="Failed to destroy network for sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.976327 containerd[1600]: time="2025-11-01T00:26:42.976279262Z" level=error msg="encountered an error cleaning up failed sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.976493 containerd[1600]: time="2025-11-01T00:26:42.976449551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77dccc7d57-zdj9g,Uid:26210ce5-453a-47fe-b5c4-bb7d1e50d30b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.976674 kubelet[2736]: E1101 00:26:42.976608 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.976767 kubelet[2736]: E1101 00:26:42.976728 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vv2f4" Nov 1 00:26:42.976813 kubelet[2736]: E1101 00:26:42.976772 2736 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vv2f4" Nov 1 00:26:42.976879 kubelet[2736]: E1101 00:26:42.976834 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vv2f4_calico-system(e7c7365e-fed9-44a2-bb07-9942249f952b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vv2f4_calico-system(e7c7365e-fed9-44a2-bb07-9942249f952b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vv2f4" podUID="e7c7365e-fed9-44a2-bb07-9942249f952b" Nov 1 00:26:42.978486 kubelet[2736]: E1101 00:26:42.978398 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.978561 kubelet[2736]: E1101 00:26:42.978528 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" Nov 1 00:26:42.978615 kubelet[2736]: E1101 00:26:42.978567 2736 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" Nov 1 00:26:42.978667 kubelet[2736]: E1101 00:26:42.978622 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-599794c67d-gvjds_calico-apiserver(18da393d-9f84-487e-a8ed-8cbdbb46de00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-599794c67d-gvjds_calico-apiserver(18da393d-9f84-487e-a8ed-8cbdbb46de00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" podUID="18da393d-9f84-487e-a8ed-8cbdbb46de00" Nov 1 00:26:42.978745 kubelet[2736]: E1101 00:26:42.978690 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.978745 kubelet[2736]: E1101 00:26:42.978716 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-qjdb4" Nov 1 00:26:42.978745 kubelet[2736]: E1101 00:26:42.978734 2736 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qjdb4" Nov 1 00:26:42.978920 kubelet[2736]: E1101 00:26:42.978757 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qjdb4_kube-system(30524ccc-9256-4e38-a18e-44025e0e57e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qjdb4_kube-system(30524ccc-9256-4e38-a18e-44025e0e57e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qjdb4" podUID="30524ccc-9256-4e38-a18e-44025e0e57e8" Nov 1 00:26:42.978920 kubelet[2736]: E1101 00:26:42.978859 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.978920 kubelet[2736]: E1101 00:26:42.978889 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" Nov 1 00:26:42.979050 kubelet[2736]: E1101 00:26:42.978902 2736 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" Nov 1 00:26:42.979050 kubelet[2736]: E1101 00:26:42.978924 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77dccc7d57-zdj9g_calico-system(26210ce5-453a-47fe-b5c4-bb7d1e50d30b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77dccc7d57-zdj9g_calico-system(26210ce5-453a-47fe-b5c4-bb7d1e50d30b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" podUID="26210ce5-453a-47fe-b5c4-bb7d1e50d30b" Nov 1 00:26:42.985387 containerd[1600]: time="2025-11-01T00:26:42.983469104Z" level=error msg="Failed to destroy network for sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.985387 containerd[1600]: time="2025-11-01T00:26:42.984010131Z" level=error msg="encountered an error cleaning up failed sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.985387 containerd[1600]: time="2025-11-01T00:26:42.984082687Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599794c67d-92gsc,Uid:151d3855-7594-4722-a64f-ba8ae7061d01,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.985761 kubelet[2736]: E1101 00:26:42.984641 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.985761 kubelet[2736]: E1101 00:26:42.984790 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" Nov 1 00:26:42.985761 kubelet[2736]: E1101 00:26:42.984843 2736 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" Nov 1 00:26:42.985933 kubelet[2736]: E1101 00:26:42.984898 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-599794c67d-92gsc_calico-apiserver(151d3855-7594-4722-a64f-ba8ae7061d01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-599794c67d-92gsc_calico-apiserver(151d3855-7594-4722-a64f-ba8ae7061d01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" podUID="151d3855-7594-4722-a64f-ba8ae7061d01" Nov 1 00:26:42.991571 containerd[1600]: time="2025-11-01T00:26:42.991517049Z" level=error msg="Failed to destroy network for sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.994914 containerd[1600]: time="2025-11-01T00:26:42.994828869Z" level=error msg="encountered an error cleaning up failed sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.995020 containerd[1600]: time="2025-11-01T00:26:42.994979843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fc5d9467-nfltp,Uid:3faeb474-0035-403d-ae2d-10880fff0c9f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.995482 kubelet[2736]: E1101 00:26:42.995371 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:42.995482 kubelet[2736]: E1101 00:26:42.995517 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fc5d9467-nfltp" Nov 1 00:26:42.995482 kubelet[2736]: E1101 00:26:42.995555 2736 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fc5d9467-nfltp" Nov 1 00:26:42.996101 kubelet[2736]: E1101 00:26:42.995647 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fc5d9467-nfltp_calico-system(3faeb474-0035-403d-ae2d-10880fff0c9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5fc5d9467-nfltp_calico-system(3faeb474-0035-403d-ae2d-10880fff0c9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fc5d9467-nfltp" podUID="3faeb474-0035-403d-ae2d-10880fff0c9f" Nov 1 00:26:43.015533 containerd[1600]: time="2025-11-01T00:26:43.015441299Z" level=error msg="Failed to destroy network for sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.016232 containerd[1600]: time="2025-11-01T00:26:43.016178774Z" level=error msg="encountered an error cleaning up failed sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.016308 containerd[1600]: time="2025-11-01T00:26:43.016258694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ghftw,Uid:29f73971-767b-4aac-baa4-25b13c4b42ec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.017592 kubelet[2736]: E1101 00:26:43.017519 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.017703 kubelet[2736]: E1101 00:26:43.017644 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ghftw" Nov 1 00:26:43.017703 kubelet[2736]: E1101 00:26:43.017680 2736 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ghftw" Nov 1 00:26:43.017784 kubelet[2736]: E1101 00:26:43.017755 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ghftw_kube-system(29f73971-767b-4aac-baa4-25b13c4b42ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ghftw_kube-system(29f73971-767b-4aac-baa4-25b13c4b42ec)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ghftw" podUID="29f73971-767b-4aac-baa4-25b13c4b42ec" Nov 1 00:26:43.288170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a-shm.mount: Deactivated successfully. Nov 1 00:26:43.579154 kubelet[2736]: I1101 00:26:43.579012 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:26:43.582731 kubelet[2736]: I1101 00:26:43.582698 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:26:43.584227 containerd[1600]: time="2025-11-01T00:26:43.584174649Z" level=info msg="StopPodSandbox for \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\"" Nov 1 00:26:43.584655 containerd[1600]: time="2025-11-01T00:26:43.584452771Z" level=info msg="Ensure that sandbox 284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e in task-service has been cleanup successfully" Nov 1 00:26:43.585185 containerd[1600]: time="2025-11-01T00:26:43.584945085Z" level=info msg="StopPodSandbox for \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\"" Nov 1 00:26:43.585247 kubelet[2736]: I1101 00:26:43.585080 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:26:43.585745 containerd[1600]: time="2025-11-01T00:26:43.585712687Z" level=info msg="StopPodSandbox for \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\"" Nov 1 00:26:43.585932 containerd[1600]: time="2025-11-01T00:26:43.585907373Z" level=info msg="Ensure that sandbox 388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a in task-service has been cleanup successfully" Nov 1 00:26:43.589836 kubelet[2736]: I1101 00:26:43.589802 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:26:43.590692 containerd[1600]: time="2025-11-01T00:26:43.590557576Z" level=info msg="StopPodSandbox for \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\"" Nov 1 00:26:43.590949 containerd[1600]: time="2025-11-01T00:26:43.590907002Z" level=info msg="Ensure that sandbox 6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784 in task-service has been cleanup successfully" Nov 1 00:26:43.593814 kubelet[2736]: I1101 00:26:43.593783 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:26:43.595568 containerd[1600]: time="2025-11-01T00:26:43.595511359Z" level=info msg="StopPodSandbox for \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\"" Nov 1 00:26:43.597629 containerd[1600]: time="2025-11-01T00:26:43.596425456Z" level=info msg="Ensure that sandbox b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8 in task-service has been cleanup successfully" Nov 1 00:26:43.599083 kubelet[2736]: I1101 00:26:43.598940 2736 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:26:43.600040 containerd[1600]: time="2025-11-01T00:26:43.599771220Z" level=info msg="StopPodSandbox for \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\"" Nov 1 00:26:43.601832 containerd[1600]: time="2025-11-01T00:26:43.600952738Z" level=info msg="Ensure that sandbox ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68 in task-service has been cleanup successfully" Nov 1 00:26:43.603041 kubelet[2736]: I1101 00:26:43.601352 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:26:43.603109 containerd[1600]: time="2025-11-01T00:26:43.601935143Z" level=info msg="StopPodSandbox for \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\"" Nov 1 00:26:43.603109 containerd[1600]: time="2025-11-01T00:26:43.602105503Z" level=info msg="Ensure that sandbox 3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6 in task-service has been cleanup successfully" Nov 1 00:26:43.603109 containerd[1600]: time="2025-11-01T00:26:43.602415496Z" level=info msg="Ensure that sandbox 2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a in task-service has been cleanup successfully" Nov 1 00:26:43.660534 containerd[1600]: time="2025-11-01T00:26:43.659931101Z" level=error msg="StopPodSandbox for \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\" failed" error="failed to destroy network for sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.660714 kubelet[2736]: E1101 00:26:43.660228 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:26:43.660714 kubelet[2736]: E1101 00:26:43.660325 2736 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6"} Nov 1 00:26:43.660714 kubelet[2736]: E1101 00:26:43.660439 2736 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7c7365e-fed9-44a2-bb07-9942249f952b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:26:43.660714 kubelet[2736]: E1101 00:26:43.660477 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7c7365e-fed9-44a2-bb07-9942249f952b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vv2f4" podUID="e7c7365e-fed9-44a2-bb07-9942249f952b" Nov 1 00:26:43.660973 containerd[1600]: time="2025-11-01T00:26:43.660941427Z" level=error msg="StopPodSandbox for \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\" failed" error="failed to destroy network for sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.661469 kubelet[2736]: E1101 00:26:43.661224 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:26:43.661469 kubelet[2736]: E1101 00:26:43.661303 2736 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784"} Nov 1 00:26:43.661469 kubelet[2736]: E1101 00:26:43.661364 2736 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29f73971-767b-4aac-baa4-25b13c4b42ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:26:43.661469 kubelet[2736]: E1101 00:26:43.661390 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29f73971-767b-4aac-baa4-25b13c4b42ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ghftw" podUID="29f73971-767b-4aac-baa4-25b13c4b42ec" Nov 1 00:26:43.664612 containerd[1600]: time="2025-11-01T00:26:43.664516181Z" level=error msg="StopPodSandbox for \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\" failed" error="failed to destroy network for sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.664979 kubelet[2736]: E1101 00:26:43.664805 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:26:43.664979 kubelet[2736]: E1101 00:26:43.664862 2736 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a"} Nov 1 00:26:43.664979 kubelet[2736]: E1101 00:26:43.664896 2736 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18da393d-9f84-487e-a8ed-8cbdbb46de00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:26:43.664979 kubelet[2736]: E1101 00:26:43.664936 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18da393d-9f84-487e-a8ed-8cbdbb46de00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" podUID="18da393d-9f84-487e-a8ed-8cbdbb46de00" Nov 1 00:26:43.674366 containerd[1600]: time="2025-11-01T00:26:43.674055336Z" level=error msg="StopPodSandbox for \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\" failed" error="failed to destroy network for sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.674366 containerd[1600]: time="2025-11-01T00:26:43.674247156Z" level=error msg="StopPodSandbox for \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\" failed" error="failed to destroy network for sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.675060 kubelet[2736]: E1101 00:26:43.674707 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:26:43.675060 kubelet[2736]: E1101 00:26:43.674778 2736 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a"} Nov 1 00:26:43.675060 kubelet[2736]: E1101 00:26:43.674820 2736 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30524ccc-9256-4e38-a18e-44025e0e57e8\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:26:43.675060 kubelet[2736]: E1101 00:26:43.674857 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30524ccc-9256-4e38-a18e-44025e0e57e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qjdb4" podUID="30524ccc-9256-4e38-a18e-44025e0e57e8" Nov 1 00:26:43.675358 kubelet[2736]: E1101 00:26:43.674903 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:26:43.675358 kubelet[2736]: E1101 00:26:43.674924 2736 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e"} Nov 1 00:26:43.675358 kubelet[2736]: E1101 00:26:43.674967 2736 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3faeb474-0035-403d-ae2d-10880fff0c9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:26:43.675358 kubelet[2736]: E1101 00:26:43.674993 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3faeb474-0035-403d-ae2d-10880fff0c9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fc5d9467-nfltp" podUID="3faeb474-0035-403d-ae2d-10880fff0c9f" Nov 1 00:26:43.679306 containerd[1600]: time="2025-11-01T00:26:43.678925862Z" level=error msg="StopPodSandbox for \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\" failed" error="failed to destroy network for sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.679379 kubelet[2736]: E1101 00:26:43.679196 2736 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:26:43.679379 kubelet[2736]: E1101 00:26:43.679228 2736 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68"} Nov 1 00:26:43.679379 kubelet[2736]: E1101 00:26:43.679264 2736 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"151d3855-7594-4722-a64f-ba8ae7061d01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:26:43.679379 kubelet[2736]: E1101 00:26:43.679284 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"151d3855-7594-4722-a64f-ba8ae7061d01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" podUID="151d3855-7594-4722-a64f-ba8ae7061d01" Nov 1 00:26:43.694017 containerd[1600]: time="2025-11-01T00:26:43.693939448Z" level=error msg="StopPodSandbox for \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\" failed" error="failed to destroy network for sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:43.694278 kubelet[2736]: E1101 00:26:43.694230 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:26:43.694364 kubelet[2736]: E1101 00:26:43.694287 2736 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8"} Nov 1 00:26:43.694431 kubelet[2736]: E1101 00:26:43.694331 2736 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26210ce5-453a-47fe-b5c4-bb7d1e50d30b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:26:43.694431 kubelet[2736]: E1101 00:26:43.694408 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26210ce5-453a-47fe-b5c4-bb7d1e50d30b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" podUID="26210ce5-453a-47fe-b5c4-bb7d1e50d30b" Nov 1 00:26:44.259969 containerd[1600]: time="2025-11-01T00:26:44.259907073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhvvn,Uid:f97e1baa-80d7-4279-b761-fdf55a406885,Namespace:calico-system,Attempt:0,}" Nov 1 00:26:44.261463 systemd-resolved[1478]: Under memory pressure, flushing caches. Nov 1 00:26:44.265626 systemd-journald[1167]: Under memory pressure, flushing caches. Nov 1 00:26:44.261503 systemd-resolved[1478]: Flushed all caches. Nov 1 00:26:46.430586 containerd[1600]: time="2025-11-01T00:26:46.427548264Z" level=error msg="Failed to destroy network for sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:46.430586 containerd[1600]: time="2025-11-01T00:26:46.428088258Z" level=error msg="encountered an error cleaning up failed sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:46.430586 containerd[1600]: time="2025-11-01T00:26:46.428136810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhvvn,Uid:f97e1baa-80d7-4279-b761-fdf55a406885,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:46.431294 kubelet[2736]: E1101 00:26:46.428461 2736 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:46.431294 kubelet[2736]: E1101 00:26:46.428532 2736 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lhvvn" Nov 1 
00:26:46.431294 kubelet[2736]: E1101 00:26:46.428553 2736 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lhvvn" Nov 1 00:26:46.431813 kubelet[2736]: E1101 00:26:46.428605 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lhvvn_calico-system(f97e1baa-80d7-4279-b761-fdf55a406885)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lhvvn_calico-system(f97e1baa-80d7-4279-b761-fdf55a406885)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:26:46.435448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451-shm.mount: Deactivated successfully. Nov 1 00:26:46.613783 kubelet[2736]: I1101 00:26:46.613736 2736 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:26:46.615725 containerd[1600]: time="2025-11-01T00:26:46.615680870Z" level=info msg="StopPodSandbox for \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\"" Nov 1 00:26:46.617377 containerd[1600]: time="2025-11-01T00:26:46.615886375Z" level=info msg="Ensure that sandbox 10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451 in task-service has been cleanup successfully" Nov 1 00:26:46.653776 containerd[1600]: time="2025-11-01T00:26:46.653716517Z" level=error msg="StopPodSandbox for \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\" failed" error="failed to destroy network for sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:26:46.654054 kubelet[2736]: E1101 00:26:46.653984 2736 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:26:46.654121 kubelet[2736]: E1101 00:26:46.654068 2736 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451"} Nov 1 00:26:46.654121 kubelet[2736]: E1101 00:26:46.654108 2736 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f97e1baa-80d7-4279-b761-fdf55a406885\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:26:46.654233 kubelet[2736]: E1101 00:26:46.654134 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f97e1baa-80d7-4279-b761-fdf55a406885\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:26:48.276424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3674600639.mount: Deactivated successfully. Nov 1 00:26:48.780155 containerd[1600]: time="2025-11-01T00:26:48.779399441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:48.781015 containerd[1600]: time="2025-11-01T00:26:48.780210954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:26:48.784184 containerd[1600]: time="2025-11-01T00:26:48.784102102Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:48.792081 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:44034.service - OpenSSH per-connection server daemon (10.0.0.1:44034). 
Nov 1 00:26:48.798548 containerd[1600]: time="2025-11-01T00:26:48.798503966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:26:48.800513 containerd[1600]: time="2025-11-01T00:26:48.799330989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.221647541s" Nov 1 00:26:48.800513 containerd[1600]: time="2025-11-01T00:26:48.799378027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:26:48.817559 containerd[1600]: time="2025-11-01T00:26:48.817496812Z" level=info msg="CreateContainer within sandbox \"c78ca9f0b522f786740e97db61fa42ba3357c7931e54a9eba97ed6128af5e6a5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:26:48.835985 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 44034 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:26:48.838606 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:48.840581 containerd[1600]: time="2025-11-01T00:26:48.840537688Z" level=info msg="CreateContainer within sandbox \"c78ca9f0b522f786740e97db61fa42ba3357c7931e54a9eba97ed6128af5e6a5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eb7821519548284175fb058e5fd85aaa200dd0724b646f1c4b4c6109a80be618\"" Nov 1 00:26:48.842435 containerd[1600]: time="2025-11-01T00:26:48.841475029Z" level=info msg="StartContainer for \"eb7821519548284175fb058e5fd85aaa200dd0724b646f1c4b4c6109a80be618\"" Nov 1 00:26:48.844195 systemd-logind[1574]: New session 10 of user core. Nov 1 00:26:48.854767 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:26:49.124995 containerd[1600]: time="2025-11-01T00:26:49.124941411Z" level=info msg="StartContainer for \"eb7821519548284175fb058e5fd85aaa200dd0724b646f1c4b4c6109a80be618\" returns successfully" Nov 1 00:26:49.145008 sshd[3951]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:49.151396 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:26:49.151504 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:26:49.151051 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:44034.service: Deactivated successfully. Nov 1 00:26:49.155721 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:26:49.156881 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:26:49.159216 systemd-logind[1574]: Removed session 10. Nov 1 00:26:49.247313 containerd[1600]: time="2025-11-01T00:26:49.247228374Z" level=info msg="StopPodSandbox for \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\"" Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.340 [INFO][4044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.341 [INFO][4044] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" iface="eth0" netns="/var/run/netns/cni-0e78830e-1918-6cad-765c-739a454fedc8" Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.342 [INFO][4044] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" iface="eth0" netns="/var/run/netns/cni-0e78830e-1918-6cad-765c-739a454fedc8" Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.344 [INFO][4044] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" iface="eth0" netns="/var/run/netns/cni-0e78830e-1918-6cad-765c-739a454fedc8" Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.344 [INFO][4044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.344 [INFO][4044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.435 [INFO][4053] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" HandleID="k8s-pod-network.284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Workload="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.437 [INFO][4053] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.437 [INFO][4053] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.444 [WARNING][4053] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" HandleID="k8s-pod-network.284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Workload="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.444 [INFO][4053] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" HandleID="k8s-pod-network.284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Workload="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.445 [INFO][4053] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:49.452217 containerd[1600]: 2025-11-01 00:26:49.448 [INFO][4044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:26:49.453616 containerd[1600]: time="2025-11-01T00:26:49.453483113Z" level=info msg="TearDown network for sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\" successfully" Nov 1 00:26:49.453616 containerd[1600]: time="2025-11-01T00:26:49.453516866Z" level=info msg="StopPodSandbox for \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\" returns successfully" Nov 1 00:26:49.456121 systemd[1]: run-netns-cni\x2d0e78830e\x2d1918\x2d6cad\x2d765c\x2d739a454fedc8.mount: Deactivated successfully. 
Nov 1 00:26:49.621673 kubelet[2736]: I1101 00:26:49.621640 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3faeb474-0035-403d-ae2d-10880fff0c9f-whisker-backend-key-pair\") pod \"3faeb474-0035-403d-ae2d-10880fff0c9f\" (UID: \"3faeb474-0035-403d-ae2d-10880fff0c9f\") " Nov 1 00:26:49.621673 kubelet[2736]: I1101 00:26:49.621679 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3faeb474-0035-403d-ae2d-10880fff0c9f-whisker-ca-bundle\") pod \"3faeb474-0035-403d-ae2d-10880fff0c9f\" (UID: \"3faeb474-0035-403d-ae2d-10880fff0c9f\") " Nov 1 00:26:49.622276 kubelet[2736]: I1101 00:26:49.621724 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drgfc\" (UniqueName: \"kubernetes.io/projected/3faeb474-0035-403d-ae2d-10880fff0c9f-kube-api-access-drgfc\") pod \"3faeb474-0035-403d-ae2d-10880fff0c9f\" (UID: \"3faeb474-0035-403d-ae2d-10880fff0c9f\") " Nov 1 00:26:49.623316 kubelet[2736]: I1101 00:26:49.623242 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3faeb474-0035-403d-ae2d-10880fff0c9f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3faeb474-0035-403d-ae2d-10880fff0c9f" (UID: "3faeb474-0035-403d-ae2d-10880fff0c9f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:26:49.625250 kubelet[2736]: E1101 00:26:49.625221 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:49.629518 kubelet[2736]: I1101 00:26:49.629487 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3faeb474-0035-403d-ae2d-10880fff0c9f-kube-api-access-drgfc" (OuterVolumeSpecName: "kube-api-access-drgfc") pod "3faeb474-0035-403d-ae2d-10880fff0c9f" (UID: "3faeb474-0035-403d-ae2d-10880fff0c9f"). InnerVolumeSpecName "kube-api-access-drgfc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:26:49.630473 systemd[1]: var-lib-kubelet-pods-3faeb474\x2d0035\x2d403d\x2dae2d\x2d10880fff0c9f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddrgfc.mount: Deactivated successfully. Nov 1 00:26:49.633395 kubelet[2736]: I1101 00:26:49.633364 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3faeb474-0035-403d-ae2d-10880fff0c9f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3faeb474-0035-403d-ae2d-10880fff0c9f" (UID: "3faeb474-0035-403d-ae2d-10880fff0c9f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:26:49.635535 systemd[1]: var-lib-kubelet-pods-3faeb474\x2d0035\x2d403d\x2dae2d\x2d10880fff0c9f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 00:26:49.668759 kubelet[2736]: I1101 00:26:49.668679 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q9cxd" podStartSLOduration=2.012970205 podStartE2EDuration="19.66865166s" podCreationTimestamp="2025-11-01 00:26:30 +0000 UTC" firstStartedPulling="2025-11-01 00:26:31.145110738 +0000 UTC m=+25.103219436" lastFinishedPulling="2025-11-01 00:26:48.800792202 +0000 UTC m=+42.758900891" observedRunningTime="2025-11-01 00:26:49.668504925 +0000 UTC m=+43.626613623" watchObservedRunningTime="2025-11-01 00:26:49.66865166 +0000 UTC m=+43.626760358" Nov 1 00:26:49.724773 kubelet[2736]: I1101 00:26:49.722878 2736 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-drgfc\" (UniqueName: \"kubernetes.io/projected/3faeb474-0035-403d-ae2d-10880fff0c9f-kube-api-access-drgfc\") on node \"localhost\" DevicePath \"\"" Nov 1 00:26:49.724773 kubelet[2736]: I1101 00:26:49.722932 2736 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3faeb474-0035-403d-ae2d-10880fff0c9f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 00:26:49.724773 kubelet[2736]: I1101 00:26:49.722946 2736 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3faeb474-0035-403d-ae2d-10880fff0c9f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 00:26:50.026001 kubelet[2736]: I1101 00:26:50.025839 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97c27\" (UniqueName: \"kubernetes.io/projected/65dc1f21-74ec-412a-ad9c-6e2587acdbb7-kube-api-access-97c27\") pod \"whisker-55d87fdf9f-wx6gc\" (UID: \"65dc1f21-74ec-412a-ad9c-6e2587acdbb7\") " pod="calico-system/whisker-55d87fdf9f-wx6gc" Nov 1 00:26:50.026001 kubelet[2736]: I1101 00:26:50.025889 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/65dc1f21-74ec-412a-ad9c-6e2587acdbb7-whisker-backend-key-pair\") pod \"whisker-55d87fdf9f-wx6gc\" (UID: \"65dc1f21-74ec-412a-ad9c-6e2587acdbb7\") " pod="calico-system/whisker-55d87fdf9f-wx6gc" Nov 1 00:26:50.026001 kubelet[2736]: I1101 00:26:50.025909 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65dc1f21-74ec-412a-ad9c-6e2587acdbb7-whisker-ca-bundle\") pod \"whisker-55d87fdf9f-wx6gc\" (UID: \"65dc1f21-74ec-412a-ad9c-6e2587acdbb7\") " pod="calico-system/whisker-55d87fdf9f-wx6gc" Nov 1 00:26:50.255480 kubelet[2736]: I1101 00:26:50.255438 2736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3faeb474-0035-403d-ae2d-10880fff0c9f" path="/var/lib/kubelet/pods/3faeb474-0035-403d-ae2d-10880fff0c9f/volumes" Nov 1 00:26:50.294106 containerd[1600]: time="2025-11-01T00:26:50.293949027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55d87fdf9f-wx6gc,Uid:65dc1f21-74ec-412a-ad9c-6e2587acdbb7,Namespace:calico-system,Attempt:0,}" Nov 1 00:26:50.626585 kubelet[2736]: I1101 00:26:50.626528 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:26:50.627209 kubelet[2736]: E1101 00:26:50.627028 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 
1 00:26:50.936695 kernel: bpftool[4192]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:26:51.221380 systemd-networkd[1274]: calia2d56e511f6: Link UP Nov 1 00:26:51.223666 systemd-networkd[1274]: calia2d56e511f6: Gained carrier Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:50.992 [INFO][4177] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0 whisker-55d87fdf9f- calico-system 65dc1f21-74ec-412a-ad9c-6e2587acdbb7 949 0 2025-11-01 00:26:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55d87fdf9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-55d87fdf9f-wx6gc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia2d56e511f6 [] [] }} ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Namespace="calico-system" Pod="whisker-55d87fdf9f-wx6gc" WorkloadEndpoint="localhost-k8s-whisker--55d87fdf9f--wx6gc-" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:50.993 [INFO][4177] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Namespace="calico-system" Pod="whisker-55d87fdf9f-wx6gc" WorkloadEndpoint="localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.086 [INFO][4199] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" HandleID="k8s-pod-network.4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Workload="localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.086 [INFO][4199] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" HandleID="k8s-pod-network.4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Workload="localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-55d87fdf9f-wx6gc", "timestamp":"2025-11-01 00:26:51.086202322 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.086 [INFO][4199] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.086 [INFO][4199] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.086 [INFO][4199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.105 [INFO][4199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" host="localhost" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.131 [INFO][4199] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.146 [INFO][4199] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.153 [INFO][4199] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.156 [INFO][4199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.157 [INFO][4199] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" host="localhost" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.159 [INFO][4199] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.168 [INFO][4199] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" host="localhost" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.188 [INFO][4199] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" host="localhost" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.188 [INFO][4199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" host="localhost" Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.188 [INFO][4199] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:26:51.262508 containerd[1600]: 2025-11-01 00:26:51.188 [INFO][4199] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" HandleID="k8s-pod-network.4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Workload="localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0" Nov 1 00:26:51.263762 containerd[1600]: 2025-11-01 00:26:51.197 [INFO][4177] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Namespace="calico-system" Pod="whisker-55d87fdf9f-wx6gc" WorkloadEndpoint="localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0", GenerateName:"whisker-55d87fdf9f-", Namespace:"calico-system", SelfLink:"", UID:"65dc1f21-74ec-412a-ad9c-6e2587acdbb7", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55d87fdf9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-55d87fdf9f-wx6gc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia2d56e511f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:51.263762 containerd[1600]: 2025-11-01 00:26:51.197 [INFO][4177] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Namespace="calico-system" Pod="whisker-55d87fdf9f-wx6gc" WorkloadEndpoint="localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0" Nov 1 00:26:51.263762 containerd[1600]: 2025-11-01 00:26:51.197 [INFO][4177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2d56e511f6 ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Namespace="calico-system" Pod="whisker-55d87fdf9f-wx6gc" WorkloadEndpoint="localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0" Nov 1 00:26:51.263762 containerd[1600]: 2025-11-01 00:26:51.225 [INFO][4177] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Namespace="calico-system" Pod="whisker-55d87fdf9f-wx6gc" WorkloadEndpoint="localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0" Nov 1 00:26:51.263762 containerd[1600]: 2025-11-01 00:26:51.226 [INFO][4177] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Namespace="calico-system" Pod="whisker-55d87fdf9f-wx6gc" WorkloadEndpoint="localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0", GenerateName:"whisker-55d87fdf9f-", Namespace:"calico-system", SelfLink:"", UID:"65dc1f21-74ec-412a-ad9c-6e2587acdbb7", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55d87fdf9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb", Pod:"whisker-55d87fdf9f-wx6gc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia2d56e511f6", MAC:"86:a7:0a:43:04:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:51.263762 containerd[1600]: 2025-11-01 00:26:51.257 [INFO][4177] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb" Namespace="calico-system" Pod="whisker-55d87fdf9f-wx6gc" WorkloadEndpoint="localhost-k8s-whisker--55d87fdf9f--wx6gc-eth0" Nov 1 00:26:51.357389 containerd[1600]: time="2025-11-01T00:26:51.356615566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:51.357389 containerd[1600]: time="2025-11-01T00:26:51.356721635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:51.357389 containerd[1600]: time="2025-11-01T00:26:51.356741121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:51.357389 containerd[1600]: time="2025-11-01T00:26:51.356876806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:51.458821 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:26:51.476610 systemd-networkd[1274]: vxlan.calico: Link UP Nov 1 00:26:51.476621 systemd-networkd[1274]: vxlan.calico: Gained carrier Nov 1 00:26:51.528546 containerd[1600]: time="2025-11-01T00:26:51.528497240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55d87fdf9f-wx6gc,Uid:65dc1f21-74ec-412a-ad9c-6e2587acdbb7,Namespace:calico-system,Attempt:0,} returns sandbox id \"4413eea724077875b1743798f2327ba762af2faf3d6c0ebd7a17c51c20f183bb\"" Nov 1 00:26:51.536563 containerd[1600]: time="2025-11-01T00:26:51.534097536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:26:51.862261 containerd[1600]: time="2025-11-01T00:26:51.862165461Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:51.894050 containerd[1600]: time="2025-11-01T00:26:51.893311036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:26:51.907276 containerd[1600]: time="2025-11-01T00:26:51.893479572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:26:51.907853 kubelet[2736]: E1101 00:26:51.907793 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:26:51.909608 kubelet[2736]: E1101 00:26:51.907888 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:26:51.910308 kubelet[2736]: E1101 00:26:51.910236 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:846d76463aa0425393ff76a8db3a1708,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97c27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d87fdf9f-wx6gc_calico-system(65dc1f21-74ec-412a-ad9c-6e2587acdbb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:51.914463 containerd[1600]: time="2025-11-01T00:26:51.913439402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:26:52.245209 containerd[1600]: time="2025-11-01T00:26:52.244945928Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:52.293976 containerd[1600]: time="2025-11-01T00:26:52.293821043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:26:52.293976 containerd[1600]: time="2025-11-01T00:26:52.293970464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:26:52.294321 kubelet[2736]: E1101 00:26:52.294229 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:26:52.294321 kubelet[2736]: E1101 00:26:52.294312 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:26:52.294537 kubelet[2736]: E1101 00:26:52.294465 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97c27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d87fdf9f-wx6gc_calico-system(65dc1f21-74ec-412a-ad9c-6e2587acdbb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:52.296084 kubelet[2736]: E1101 00:26:52.295976 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55d87fdf9f-wx6gc" podUID="65dc1f21-74ec-412a-ad9c-6e2587acdbb7" Nov 1 00:26:52.636962 kubelet[2736]: E1101 00:26:52.636789 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55d87fdf9f-wx6gc" podUID="65dc1f21-74ec-412a-ad9c-6e2587acdbb7" Nov 1 00:26:52.645793 systemd-networkd[1274]: vxlan.calico: Gained IPv6LL Nov 1 00:26:53.284677 systemd-networkd[1274]: calia2d56e511f6: Gained IPv6LL Nov 1 00:26:54.162812 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:44048.service - OpenSSH per-connection server daemon (10.0.0.1:44048). Nov 1 00:26:54.198956 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 44048 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:26:54.201083 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:54.205462 systemd-logind[1574]: New session 11 of user core. Nov 1 00:26:54.215756 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:26:54.358753 sshd[4355]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:54.363457 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:44048.service: Deactivated successfully. Nov 1 00:26:54.366510 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:26:54.366586 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:26:54.367930 systemd-logind[1574]: Removed session 11. Nov 1 00:26:54.491468 kubelet[2736]: I1101 00:26:54.491278 2736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:26:54.491967 kubelet[2736]: E1101 00:26:54.491950 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:54.653963 kubelet[2736]: E1101 00:26:54.653909 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:54.677162 systemd[1]: run-containerd-runc-k8s.io-eb7821519548284175fb058e5fd85aaa200dd0724b646f1c4b4c6109a80be618-runc.MePVkL.mount: Deactivated successfully. Nov 1 00:26:55.249729 containerd[1600]: time="2025-11-01T00:26:55.249623469Z" level=info msg="StopPodSandbox for \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\"" Nov 1 00:26:55.250581 containerd[1600]: time="2025-11-01T00:26:55.250502059Z" level=info msg="StopPodSandbox for \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\"" Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.303 [INFO][4439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.303 [INFO][4439] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" iface="eth0" netns="/var/run/netns/cni-e7c9919e-87e9-66d7-887a-2da77a159064" Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.304 [INFO][4439] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" iface="eth0" netns="/var/run/netns/cni-e7c9919e-87e9-66d7-887a-2da77a159064" Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.304 [INFO][4439] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" iface="eth0" netns="/var/run/netns/cni-e7c9919e-87e9-66d7-887a-2da77a159064" Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.304 [INFO][4439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.304 [INFO][4439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.333 [INFO][4454] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" HandleID="k8s-pod-network.b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.335 [INFO][4454] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.335 [INFO][4454] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.342 [WARNING][4454] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" HandleID="k8s-pod-network.b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.342 [INFO][4454] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" HandleID="k8s-pod-network.b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.344 [INFO][4454] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:55.350246 containerd[1600]: 2025-11-01 00:26:55.347 [INFO][4439] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:26:55.353517 containerd[1600]: time="2025-11-01T00:26:55.353464925Z" level=info msg="TearDown network for sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\" successfully" Nov 1 00:26:55.353517 containerd[1600]: time="2025-11-01T00:26:55.353500893Z" level=info msg="StopPodSandbox for \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\" returns successfully" Nov 1 00:26:55.353724 systemd[1]: run-netns-cni\x2de7c9919e\x2d87e9\x2d66d7\x2d887a\x2d2da77a159064.mount: Deactivated successfully. 
Nov 1 00:26:55.354622 containerd[1600]: time="2025-11-01T00:26:55.354562222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77dccc7d57-zdj9g,Uid:26210ce5-453a-47fe-b5c4-bb7d1e50d30b,Namespace:calico-system,Attempt:1,}" Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.306 [INFO][4440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.307 [INFO][4440] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" iface="eth0" netns="/var/run/netns/cni-1cb65b59-2983-745a-e12a-5ccd66fff666" Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.307 [INFO][4440] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" iface="eth0" netns="/var/run/netns/cni-1cb65b59-2983-745a-e12a-5ccd66fff666" Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.307 [INFO][4440] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" iface="eth0" netns="/var/run/netns/cni-1cb65b59-2983-745a-e12a-5ccd66fff666" Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.307 [INFO][4440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.307 [INFO][4440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.337 [INFO][4460] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" HandleID="k8s-pod-network.ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.338 [INFO][4460] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.344 [INFO][4460] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.350 [WARNING][4460] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" HandleID="k8s-pod-network.ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.350 [INFO][4460] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" HandleID="k8s-pod-network.ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.355 [INFO][4460] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:55.361911 containerd[1600]: 2025-11-01 00:26:55.358 [INFO][4440] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:26:55.362452 containerd[1600]: time="2025-11-01T00:26:55.362113782Z" level=info msg="TearDown network for sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\" successfully" Nov 1 00:26:55.362452 containerd[1600]: time="2025-11-01T00:26:55.362155922Z" level=info msg="StopPodSandbox for \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\" returns successfully" Nov 1 00:26:55.364609 containerd[1600]: time="2025-11-01T00:26:55.364573076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599794c67d-92gsc,Uid:151d3855-7594-4722-a64f-ba8ae7061d01,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:26:55.365310 systemd[1]: run-netns-cni\x2d1cb65b59\x2d2983\x2d745a\x2de12a\x2d5ccd66fff666.mount: Deactivated successfully. Nov 1 00:26:55.485600 systemd-networkd[1274]: cali936739c8dc6: Link UP Nov 1 00:26:55.485804 systemd-networkd[1274]: cali936739c8dc6: Gained carrier Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.412 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0 calico-kube-controllers-77dccc7d57- calico-system 26210ce5-453a-47fe-b5c4-bb7d1e50d30b 1008 0 2025-11-01 00:26:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77dccc7d57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77dccc7d57-zdj9g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali936739c8dc6 [] [] }} ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Namespace="calico-system" Pod="calico-kube-controllers-77dccc7d57-zdj9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.412 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Namespace="calico-system" Pod="calico-kube-controllers-77dccc7d57-zdj9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.443 [INFO][4499] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" HandleID="k8s-pod-network.96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.443 [INFO][4499] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" HandleID="k8s-pod-network.96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77dccc7d57-zdj9g", "timestamp":"2025-11-01 00:26:55.443465074 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.443 [INFO][4499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.443 [INFO][4499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.443 [INFO][4499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.450 [INFO][4499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" host="localhost" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.455 [INFO][4499] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.462 [INFO][4499] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.464 [INFO][4499] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.466 [INFO][4499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.466 [INFO][4499] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" host="localhost" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.468 [INFO][4499] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4 Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.472 [INFO][4499] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" host="localhost" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.478 [INFO][4499] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" host="localhost" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.478 [INFO][4499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" host="localhost" Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.479 [INFO][4499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:26:55.502665 containerd[1600]: 2025-11-01 00:26:55.479 [INFO][4499] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" HandleID="k8s-pod-network.96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:26:55.503305 containerd[1600]: 2025-11-01 00:26:55.482 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Namespace="calico-system" Pod="calico-kube-controllers-77dccc7d57-zdj9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0", GenerateName:"calico-kube-controllers-77dccc7d57-", Namespace:"calico-system", SelfLink:"", UID:"26210ce5-453a-47fe-b5c4-bb7d1e50d30b", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77dccc7d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77dccc7d57-zdj9g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali936739c8dc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:55.503305 containerd[1600]: 2025-11-01 00:26:55.482 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Namespace="calico-system" Pod="calico-kube-controllers-77dccc7d57-zdj9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:26:55.503305 containerd[1600]: 2025-11-01 00:26:55.482 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali936739c8dc6 ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Namespace="calico-system" Pod="calico-kube-controllers-77dccc7d57-zdj9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:26:55.503305 containerd[1600]: 2025-11-01 00:26:55.484 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Namespace="calico-system" Pod="calico-kube-controllers-77dccc7d57-zdj9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:26:55.503305 containerd[1600]: 2025-11-01 00:26:55.485 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Namespace="calico-system" Pod="calico-kube-controllers-77dccc7d57-zdj9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0", GenerateName:"calico-kube-controllers-77dccc7d57-", Namespace:"calico-system", SelfLink:"", UID:"26210ce5-453a-47fe-b5c4-bb7d1e50d30b", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77dccc7d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4", Pod:"calico-kube-controllers-77dccc7d57-zdj9g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali936739c8dc6", MAC:"2a:6b:70:e1:fb:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:55.503305 containerd[1600]: 2025-11-01 00:26:55.493 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4" Namespace="calico-system" Pod="calico-kube-controllers-77dccc7d57-zdj9g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:26:55.531255 containerd[1600]: time="2025-11-01T00:26:55.531152294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:55.531255 containerd[1600]: time="2025-11-01T00:26:55.531235441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:55.531693 containerd[1600]: time="2025-11-01T00:26:55.531277161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:55.531693 containerd[1600]: time="2025-11-01T00:26:55.531463852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:55.572302 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:26:55.591492 systemd-networkd[1274]: calie56fabf58e6: Link UP Nov 1 00:26:55.592304 systemd-networkd[1274]: calie56fabf58e6: Gained carrier Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.421 [INFO][4480] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0 calico-apiserver-599794c67d- calico-apiserver 151d3855-7594-4722-a64f-ba8ae7061d01 1009 0 2025-11-01 00:26:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:599794c67d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-599794c67d-92gsc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie56fabf58e6 [] [] }} ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-92gsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--92gsc-" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.421 [INFO][4480] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-92gsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.451 [INFO][4505] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" HandleID="k8s-pod-network.c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.451 [INFO][4505] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" HandleID="k8s-pod-network.c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-599794c67d-92gsc", "timestamp":"2025-11-01 00:26:55.451235867 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.451 [INFO][4505] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.479 [INFO][4505] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.479 [INFO][4505] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.552 [INFO][4505] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" host="localhost" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.561 [INFO][4505] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.565 [INFO][4505] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.567 [INFO][4505] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.570 [INFO][4505] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.570 [INFO][4505] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" host="localhost" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.571 [INFO][4505] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88 Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.576 [INFO][4505] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" host="localhost" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.582 [INFO][4505] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" host="localhost" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.583 [INFO][4505] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" host="localhost" Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.583 [INFO][4505] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
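The IPAM entries above walk through the allocation under the host-wide lock: the plugin confirms this node's affinity for the 192.168.88.128/26 block and then claims 192.168.88.131/26 for the calico-apiserver pod. Purely to illustrate the arithmetic involved (this is not Calico's allocator), the sketch below walks the same /26 and returns the first address not yet handed out; the "used" entries mirror the .130 address visible earlier in the log plus an assumed earlier .129 assignment that is not shown in this excerpt.

```go
// ipam_sketch.go - illustration only, not Calico's implementation: find the first
// unused address in the affine block from the log.
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block, after the network address itself,
// that does not appear in the used set.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.129"): true, // assumed: allocated before this excerpt
		netip.MustParseAddr("192.168.88.130"): true, // calico-kube-controllers-77dccc7d57-zdj9g (from the log)
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("next free address:", a) // prints 192.168.88.131, matching the log
	}
}
```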
Nov 1 00:26:55.609744 containerd[1600]: 2025-11-01 00:26:55.583 [INFO][4505] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" HandleID="k8s-pod-network.c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:26:55.610424 containerd[1600]: 2025-11-01 00:26:55.586 [INFO][4480] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-92gsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0", GenerateName:"calico-apiserver-599794c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"151d3855-7594-4722-a64f-ba8ae7061d01", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599794c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-599794c67d-92gsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie56fabf58e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:55.610424 containerd[1600]: 2025-11-01 00:26:55.587 [INFO][4480] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-92gsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:26:55.610424 containerd[1600]: 2025-11-01 00:26:55.587 [INFO][4480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie56fabf58e6 ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-92gsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:26:55.610424 containerd[1600]: 2025-11-01 00:26:55.593 [INFO][4480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-92gsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:26:55.610424 containerd[1600]: 2025-11-01 00:26:55.594 [INFO][4480] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-92gsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0", GenerateName:"calico-apiserver-599794c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"151d3855-7594-4722-a64f-ba8ae7061d01", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599794c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88", Pod:"calico-apiserver-599794c67d-92gsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie56fabf58e6", MAC:"ca:dd:f9:c8:3f:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:55.610424 containerd[1600]: 2025-11-01 00:26:55.605 [INFO][4480] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-92gsc" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:26:55.612695 containerd[1600]: time="2025-11-01T00:26:55.612569551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77dccc7d57-zdj9g,Uid:26210ce5-453a-47fe-b5c4-bb7d1e50d30b,Namespace:calico-system,Attempt:1,} returns sandbox id \"96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4\"" Nov 1 00:26:55.616637 containerd[1600]: time="2025-11-01T00:26:55.616590899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:26:55.641564 containerd[1600]: time="2025-11-01T00:26:55.641183389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:55.641564 containerd[1600]: time="2025-11-01T00:26:55.641262227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:55.641564 containerd[1600]: time="2025-11-01T00:26:55.641288798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:55.641564 containerd[1600]: time="2025-11-01T00:26:55.641466763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:55.673456 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:26:55.703908 containerd[1600]: time="2025-11-01T00:26:55.703847959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599794c67d-92gsc,Uid:151d3855-7594-4722-a64f-ba8ae7061d01,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88\"" Nov 1 00:26:55.980362 containerd[1600]: time="2025-11-01T00:26:55.980276891Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:56.114491 containerd[1600]: time="2025-11-01T00:26:56.114401657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:26:56.114666 containerd[1600]: time="2025-11-01T00:26:56.114441926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:26:56.114781 kubelet[2736]: E1101 00:26:56.114726 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:26:56.115300 kubelet[2736]: E1101 00:26:56.114782 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:26:56.115300 kubelet[2736]: E1101 00:26:56.115070 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxtr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77dccc7d57-zdj9g_calico-system(26210ce5-453a-47fe-b5c4-bb7d1e50d30b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:56.115525 containerd[1600]: time="2025-11-01T00:26:56.115231615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:26:56.116300 kubelet[2736]: E1101 00:26:56.116264 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" 
podUID="26210ce5-453a-47fe-b5c4-bb7d1e50d30b" Nov 1 00:26:56.252584 containerd[1600]: time="2025-11-01T00:26:56.252140305Z" level=info msg="StopPodSandbox for \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\"" Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.300 [INFO][4628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.300 [INFO][4628] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" iface="eth0" netns="/var/run/netns/cni-bcbc3dc4-f0c5-6596-6819-486b5c80d9e6" Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.301 [INFO][4628] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" iface="eth0" netns="/var/run/netns/cni-bcbc3dc4-f0c5-6596-6819-486b5c80d9e6" Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.301 [INFO][4628] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" iface="eth0" netns="/var/run/netns/cni-bcbc3dc4-f0c5-6596-6819-486b5c80d9e6" Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.301 [INFO][4628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.301 [INFO][4628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.328 [INFO][4637] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" HandleID="k8s-pod-network.3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.328 [INFO][4637] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.328 [INFO][4637] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.333 [WARNING][4637] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" HandleID="k8s-pod-network.3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.334 [INFO][4637] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" HandleID="k8s-pod-network.3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.335 [INFO][4637] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:56.340965 containerd[1600]: 2025-11-01 00:26:56.338 [INFO][4628] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:26:56.341749 containerd[1600]: time="2025-11-01T00:26:56.341685546Z" level=info msg="TearDown network for sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\" successfully" Nov 1 00:26:56.341749 containerd[1600]: time="2025-11-01T00:26:56.341725334Z" level=info msg="StopPodSandbox for \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\" returns successfully" Nov 1 00:26:56.342611 containerd[1600]: time="2025-11-01T00:26:56.342569739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vv2f4,Uid:e7c7365e-fed9-44a2-bb07-9942249f952b,Namespace:calico-system,Attempt:1,}" Nov 1 00:26:56.345550 systemd[1]: run-netns-cni\x2dbcbc3dc4\x2df0c5\x2d6596\x2d6819\x2d486b5c80d9e6.mount: Deactivated successfully. Nov 1 00:26:56.454965 systemd-networkd[1274]: cali8a249a30370: Link UP Nov 1 00:26:56.455268 systemd-networkd[1274]: cali8a249a30370: Gained carrier Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.390 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--vv2f4-eth0 goldmane-666569f655- calico-system e7c7365e-fed9-44a2-bb07-9942249f952b 1027 0 2025-11-01 00:26:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-vv2f4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8a249a30370 [] [] }} ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Namespace="calico-system" Pod="goldmane-666569f655-vv2f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vv2f4-" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.390 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Namespace="calico-system" Pod="goldmane-666569f655-vv2f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.419 [INFO][4659] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" HandleID="k8s-pod-network.6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.419 [INFO][4659] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" HandleID="k8s-pod-network.6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f250), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-vv2f4", "timestamp":"2025-11-01 00:26:56.419186767 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.419 [INFO][4659] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.419 [INFO][4659] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.419 [INFO][4659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.426 [INFO][4659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" host="localhost" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.430 [INFO][4659] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.433 [INFO][4659] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.435 [INFO][4659] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.437 [INFO][4659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.437 [INFO][4659] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" host="localhost" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.438 [INFO][4659] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2 Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.442 [INFO][4659] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" host="localhost" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.448 [INFO][4659] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" host="localhost" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.448 [INFO][4659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" host="localhost" Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.448 [INFO][4659] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
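The goldmane pod receives 192.168.88.132/26 here, the next consecutive address from the same block the node already holds an affinity for. As a small hedged check (again not Calico code), one can verify that each per-pod /32 seen in this log sits inside that affine /26; the names and addresses below are taken from the entries above.

```go
// blockcheck.go - illustrative sketch: confirm the per-pod addresses from the log
// fall inside the block 192.168.88.128/26 that this host holds an affinity for.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	pods := map[string]string{ // pod -> address, both taken from the log above
		"calico-kube-controllers-77dccc7d57-zdj9g": "192.168.88.130",
		"calico-apiserver-599794c67d-92gsc":        "192.168.88.131",
		"goldmane-666569f655-vv2f4":                "192.168.88.132",
	}
	for pod, ip := range pods {
		a := netip.MustParseAddr(ip)
		fmt.Printf("%-42s %s in %s: %v\n", pod, a, block, block.Contains(a))
	}
}
```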
Nov 1 00:26:56.479831 containerd[1600]: 2025-11-01 00:26:56.448 [INFO][4659] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" HandleID="k8s-pod-network.6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:26:56.480750 containerd[1600]: 2025-11-01 00:26:56.452 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Namespace="calico-system" Pod="goldmane-666569f655-vv2f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vv2f4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vv2f4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e7c7365e-fed9-44a2-bb07-9942249f952b", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-vv2f4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8a249a30370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:56.480750 containerd[1600]: 2025-11-01 00:26:56.452 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Namespace="calico-system" Pod="goldmane-666569f655-vv2f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:26:56.480750 containerd[1600]: 2025-11-01 00:26:56.452 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a249a30370 ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Namespace="calico-system" Pod="goldmane-666569f655-vv2f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:26:56.480750 containerd[1600]: 2025-11-01 00:26:56.455 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Namespace="calico-system" Pod="goldmane-666569f655-vv2f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:26:56.480750 containerd[1600]: 2025-11-01 00:26:56.457 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Namespace="calico-system" Pod="goldmane-666569f655-vv2f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vv2f4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vv2f4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e7c7365e-fed9-44a2-bb07-9942249f952b", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2", Pod:"goldmane-666569f655-vv2f4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8a249a30370", MAC:"66:6f:bd:6d:86:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:56.480750 containerd[1600]: 2025-11-01 00:26:56.474 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2" Namespace="calico-system" Pod="goldmane-666569f655-vv2f4" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:26:56.506879 containerd[1600]: time="2025-11-01T00:26:56.505763949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:56.506879 containerd[1600]: time="2025-11-01T00:26:56.506669533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:56.506879 containerd[1600]: time="2025-11-01T00:26:56.506687528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:56.507073 containerd[1600]: time="2025-11-01T00:26:56.506861134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:56.510080 containerd[1600]: time="2025-11-01T00:26:56.510007058Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:56.512370 containerd[1600]: time="2025-11-01T00:26:56.512219373Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:26:56.512370 containerd[1600]: time="2025-11-01T00:26:56.512247017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:26:56.512519 kubelet[2736]: E1101 00:26:56.512485 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:56.512578 kubelet[2736]: E1101 00:26:56.512540 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:26:56.512714 kubelet[2736]: E1101 00:26:56.512667 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mp69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599794c67d-92gsc_calico-apiserver(151d3855-7594-4722-a64f-ba8ae7061d01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:56.515071 kubelet[2736]: E1101 00:26:56.515009 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" podUID="151d3855-7594-4722-a64f-ba8ae7061d01" Nov 1 00:26:56.542013 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:26:56.569998 containerd[1600]: time="2025-11-01T00:26:56.569944946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vv2f4,Uid:e7c7365e-fed9-44a2-bb07-9942249f952b,Namespace:calico-system,Attempt:1,} returns sandbox id \"6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2\"" Nov 1 00:26:56.573049 containerd[1600]: time="2025-11-01T00:26:56.571696609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:26:56.647739 kubelet[2736]: E1101 00:26:56.647651 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" podUID="151d3855-7594-4722-a64f-ba8ae7061d01" Nov 1 00:26:56.649575 kubelet[2736]: E1101 00:26:56.649524 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" podUID="26210ce5-453a-47fe-b5c4-bb7d1e50d30b" Nov 1 00:26:56.741512 systemd-networkd[1274]: cali936739c8dc6: Gained IPv6LL Nov 1 00:26:56.872696 containerd[1600]: time="2025-11-01T00:26:56.872626081Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:26:56.873988 containerd[1600]: time="2025-11-01T00:26:56.873937180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:26:56.874139 containerd[1600]: time="2025-11-01T00:26:56.874034599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:26:56.874344 kubelet[2736]: E1101 00:26:56.874280 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:26:56.874405 kubelet[2736]: E1101 00:26:56.874371 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:26:56.874639 kubelet[2736]: E1101 00:26:56.874556 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56b8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vv2f4_calico-system(e7c7365e-fed9-44a2-bb07-9942249f952b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:26:56.876470 kubelet[2736]: E1101 00:26:56.876414 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vv2f4" podUID="e7c7365e-fed9-44a2-bb07-9942249f952b" Nov 1 00:26:57.380538 systemd-networkd[1274]: calie56fabf58e6: Gained IPv6LL Nov 1 00:26:57.652475 kubelet[2736]: E1101 00:26:57.652262 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vv2f4" podUID="e7c7365e-fed9-44a2-bb07-9942249f952b" Nov 1 00:26:57.652475 kubelet[2736]: E1101 00:26:57.652283 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" podUID="151d3855-7594-4722-a64f-ba8ae7061d01" Nov 1 00:26:57.700586 systemd-networkd[1274]: cali8a249a30370: Gained IPv6LL Nov 1 00:26:58.249767 containerd[1600]: time="2025-11-01T00:26:58.249688399Z" level=info msg="StopPodSandbox for 
\"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\"" Nov 1 00:26:58.250399 containerd[1600]: time="2025-11-01T00:26:58.249791999Z" level=info msg="StopPodSandbox for \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\"" Nov 1 00:26:58.250549 containerd[1600]: time="2025-11-01T00:26:58.250523033Z" level=info msg="StopPodSandbox for \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\"" Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.816 [INFO][4763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.818 [INFO][4763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" iface="eth0" netns="/var/run/netns/cni-72b2bbbd-cd54-6672-9f79-42f1dcb3e23c" Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.819 [INFO][4763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" iface="eth0" netns="/var/run/netns/cni-72b2bbbd-cd54-6672-9f79-42f1dcb3e23c" Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.821 [INFO][4763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" iface="eth0" netns="/var/run/netns/cni-72b2bbbd-cd54-6672-9f79-42f1dcb3e23c" Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.821 [INFO][4763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.821 [INFO][4763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.849 [INFO][4787] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" HandleID="k8s-pod-network.6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.854 [INFO][4787] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.854 [INFO][4787] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.863 [WARNING][4787] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" HandleID="k8s-pod-network.6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.863 [INFO][4787] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" HandleID="k8s-pod-network.6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.867 [INFO][4787] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:26:58.879114 containerd[1600]: 2025-11-01 00:26:58.873 [INFO][4763] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:26:58.881039 containerd[1600]: time="2025-11-01T00:26:58.879569346Z" level=info msg="TearDown network for sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\" successfully" Nov 1 00:26:58.881039 containerd[1600]: time="2025-11-01T00:26:58.879603372Z" level=info msg="StopPodSandbox for \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\" returns successfully" Nov 1 00:26:58.881977 kubelet[2736]: E1101 00:26:58.880267 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:58.885192 systemd[1]: run-netns-cni\x2d72b2bbbd\x2dcd54\x2d6672\x2d9f79\x2d42f1dcb3e23c.mount: Deactivated successfully. Nov 1 00:26:58.887167 containerd[1600]: time="2025-11-01T00:26:58.883329057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ghftw,Uid:29f73971-767b-4aac-baa4-25b13c4b42ec,Namespace:kube-system,Attempt:1,}" Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.818 [INFO][4762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.819 [INFO][4762] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" iface="eth0" netns="/var/run/netns/cni-7189bbda-5282-bcc6-23eb-20346cfd4558" Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.820 [INFO][4762] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" iface="eth0" netns="/var/run/netns/cni-7189bbda-5282-bcc6-23eb-20346cfd4558" Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.821 [INFO][4762] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" iface="eth0" netns="/var/run/netns/cni-7189bbda-5282-bcc6-23eb-20346cfd4558" Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.821 [INFO][4762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.821 [INFO][4762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.854 [INFO][4786] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" HandleID="k8s-pod-network.2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.855 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.867 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.874 [WARNING][4786] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" HandleID="k8s-pod-network.2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.874 [INFO][4786] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" HandleID="k8s-pod-network.2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.876 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:58.887167 containerd[1600]: 2025-11-01 00:26:58.882 [INFO][4762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:26:58.893163 containerd[1600]: time="2025-11-01T00:26:58.890796167Z" level=info msg="TearDown network for sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\" successfully" Nov 1 00:26:58.893163 containerd[1600]: time="2025-11-01T00:26:58.890825875Z" level=info msg="StopPodSandbox for \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\" returns successfully" Nov 1 00:26:58.893163 containerd[1600]: time="2025-11-01T00:26:58.891613117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qjdb4,Uid:30524ccc-9256-4e38-a18e-44025e0e57e8,Namespace:kube-system,Attempt:1,}" Nov 1 00:26:58.893492 kubelet[2736]: E1101 00:26:58.891182 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:58.894885 systemd[1]: run-netns-cni\x2d7189bbda\x2d5282\x2dbcc6\x2d23eb\x2d20346cfd4558.mount: Deactivated successfully. Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.821 [INFO][4761] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.822 [INFO][4761] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" iface="eth0" netns="/var/run/netns/cni-570c7bf6-d17e-60aa-ce6d-fd58001c9f02" Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.823 [INFO][4761] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" iface="eth0" netns="/var/run/netns/cni-570c7bf6-d17e-60aa-ce6d-fd58001c9f02" Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.823 [INFO][4761] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" iface="eth0" netns="/var/run/netns/cni-570c7bf6-d17e-60aa-ce6d-fd58001c9f02" Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.823 [INFO][4761] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.823 [INFO][4761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.875 [INFO][4789] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" HandleID="k8s-pod-network.388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.875 [INFO][4789] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.876 [INFO][4789] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.890 [WARNING][4789] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" HandleID="k8s-pod-network.388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.890 [INFO][4789] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" HandleID="k8s-pod-network.388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.896 [INFO][4789] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:58.904771 containerd[1600]: 2025-11-01 00:26:58.901 [INFO][4761] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:26:58.906763 containerd[1600]: time="2025-11-01T00:26:58.904976320Z" level=info msg="TearDown network for sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\" successfully" Nov 1 00:26:58.906763 containerd[1600]: time="2025-11-01T00:26:58.905003603Z" level=info msg="StopPodSandbox for \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\" returns successfully" Nov 1 00:26:58.907925 containerd[1600]: time="2025-11-01T00:26:58.907577811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599794c67d-gvjds,Uid:18da393d-9f84-487e-a8ed-8cbdbb46de00,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:26:58.909781 systemd[1]: run-netns-cni\x2d570c7bf6\x2dd17e\x2d60aa\x2dce6d\x2dfd58001c9f02.mount: Deactivated successfully. 
Nov 1 00:26:59.054766 systemd-networkd[1274]: cali2d16da40201: Link UP Nov 1 00:26:59.055706 systemd-networkd[1274]: cali2d16da40201: Gained carrier Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:58.977 [INFO][4821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0 coredns-668d6bf9bc- kube-system 30524ccc-9256-4e38-a18e-44025e0e57e8 1065 0 2025-11-01 00:26:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-qjdb4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d16da40201 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjdb4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qjdb4-" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:58.979 [INFO][4821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjdb4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.011 [INFO][4853] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" HandleID="k8s-pod-network.4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.011 [INFO][4853] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" HandleID="k8s-pod-network.4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4120), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-qjdb4", "timestamp":"2025-11-01 00:26:59.011101677 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.011 [INFO][4853] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.011 [INFO][4853] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.011 [INFO][4853] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.017 [INFO][4853] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" host="localhost" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.022 [INFO][4853] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.026 [INFO][4853] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.028 [INFO][4853] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.031 [INFO][4853] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.031 [INFO][4853] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" host="localhost" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.032 [INFO][4853] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043 Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.036 [INFO][4853] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" host="localhost" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.043 [INFO][4853] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" host="localhost" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.043 [INFO][4853] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" host="localhost" Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.043 [INFO][4853] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:26:59.078177 containerd[1600]: 2025-11-01 00:26:59.043 [INFO][4853] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" HandleID="k8s-pod-network.4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:26:59.079660 containerd[1600]: 2025-11-01 00:26:59.048 [INFO][4821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjdb4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30524ccc-9256-4e38-a18e-44025e0e57e8", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-qjdb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d16da40201", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:59.079660 containerd[1600]: 2025-11-01 00:26:59.048 [INFO][4821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjdb4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:26:59.079660 containerd[1600]: 2025-11-01 00:26:59.048 [INFO][4821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d16da40201 ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjdb4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:26:59.079660 containerd[1600]: 2025-11-01 00:26:59.056 [INFO][4821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjdb4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:26:59.079660 
containerd[1600]: 2025-11-01 00:26:59.056 [INFO][4821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjdb4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30524ccc-9256-4e38-a18e-44025e0e57e8", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043", Pod:"coredns-668d6bf9bc-qjdb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d16da40201", MAC:"4e:9b:bc:aa:c4:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:59.079660 containerd[1600]: 2025-11-01 00:26:59.074 [INFO][4821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043" Namespace="kube-system" Pod="coredns-668d6bf9bc-qjdb4" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:26:59.118078 containerd[1600]: time="2025-11-01T00:26:59.117932698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:59.118264 containerd[1600]: time="2025-11-01T00:26:59.118186868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:59.118264 containerd[1600]: time="2025-11-01T00:26:59.118247937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:59.120358 containerd[1600]: time="2025-11-01T00:26:59.118612692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:59.173272 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:26:59.204435 containerd[1600]: time="2025-11-01T00:26:59.204385321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qjdb4,Uid:30524ccc-9256-4e38-a18e-44025e0e57e8,Namespace:kube-system,Attempt:1,} returns sandbox id \"4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043\"" Nov 1 00:26:59.205297 kubelet[2736]: E1101 00:26:59.205273 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:59.207970 containerd[1600]: time="2025-11-01T00:26:59.207751606Z" level=info msg="CreateContainer within sandbox \"4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:26:59.372568 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:51208.service - OpenSSH per-connection server daemon (10.0.0.1:51208). Nov 1 00:26:59.410174 sshd[4928]: Accepted publickey for core from 10.0.0.1 port 51208 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:26:59.412459 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:26:59.417023 systemd-logind[1574]: New session 12 of user core. Nov 1 00:26:59.426828 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:26:59.587374 containerd[1600]: time="2025-11-01T00:26:59.586116645Z" level=info msg="CreateContainer within sandbox \"4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dcd356463b7ff620e8e26a1306a26326bc367e0731f6bbc52d28b21ea54b300d\"" Nov 1 00:26:59.587831 containerd[1600]: time="2025-11-01T00:26:59.587495219Z" level=info msg="StartContainer for \"dcd356463b7ff620e8e26a1306a26326bc367e0731f6bbc52d28b21ea54b300d\"" Nov 1 00:26:59.592039 systemd-networkd[1274]: calida0d58d341d: Link UP Nov 1 00:26:59.593377 systemd-networkd[1274]: calida0d58d341d: Gained carrier Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:58.976 [INFO][4811] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--ghftw-eth0 coredns-668d6bf9bc- kube-system 29f73971-767b-4aac-baa4-25b13c4b42ec 1066 0 2025-11-01 00:26:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-ghftw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida0d58d341d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghftw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ghftw-" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:58.976 [INFO][4811] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghftw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.013 [INFO][4851] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" HandleID="k8s-pod-network.5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.014 [INFO][4851] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" HandleID="k8s-pod-network.5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e6fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-ghftw", "timestamp":"2025-11-01 00:26:59.013925503 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.014 [INFO][4851] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.044 [INFO][4851] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.044 [INFO][4851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.133 [INFO][4851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" host="localhost" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.144 [INFO][4851] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.148 [INFO][4851] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.151 [INFO][4851] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.154 [INFO][4851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.154 [INFO][4851] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" host="localhost" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.157 [INFO][4851] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.208 [INFO][4851] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" host="localhost" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.541 [INFO][4851] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" host="localhost" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.541 [INFO][4851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.134/26] handle="k8s-pod-network.5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" host="localhost" Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.541 [INFO][4851] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:26:59.636211 containerd[1600]: 2025-11-01 00:26:59.541 [INFO][4851] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" HandleID="k8s-pod-network.5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:26:59.637053 containerd[1600]: 2025-11-01 00:26:59.564 [INFO][4811] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghftw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ghftw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29f73971-767b-4aac-baa4-25b13c4b42ec", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-ghftw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida0d58d341d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:59.637053 containerd[1600]: 2025-11-01 00:26:59.567 [INFO][4811] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghftw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:26:59.637053 containerd[1600]: 2025-11-01 00:26:59.569 [INFO][4811] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida0d58d341d ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghftw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:26:59.637053 containerd[1600]: 2025-11-01 00:26:59.593 [INFO][4811] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghftw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:26:59.637053 containerd[1600]: 2025-11-01 00:26:59.598 [INFO][4811] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghftw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ghftw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29f73971-767b-4aac-baa4-25b13c4b42ec", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d", Pod:"coredns-668d6bf9bc-ghftw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida0d58d341d", MAC:"d2:57:2c:f0:fb:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:59.637053 containerd[1600]: 2025-11-01 00:26:59.617 [INFO][4811] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d" Namespace="kube-system" Pod="coredns-668d6bf9bc-ghftw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:26:59.679133 systemd-networkd[1274]: cali5f6ca2cb91d: Link UP Nov 1 00:26:59.688882 systemd-networkd[1274]: cali5f6ca2cb91d: Gained carrier Nov 1 00:26:59.703282 sshd[4928]: pam_unix(sshd:session): session closed for user core Nov 1 00:26:59.717120 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:51208.service: Deactivated successfully. Nov 1 00:26:59.729748 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:26:59.737875 containerd[1600]: time="2025-11-01T00:26:59.715189554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:59.737875 containerd[1600]: time="2025-11-01T00:26:59.715272835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:59.737875 containerd[1600]: time="2025-11-01T00:26:59.715287593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:59.737875 containerd[1600]: time="2025-11-01T00:26:59.718069598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:59.736755 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:26:59.739622 systemd-logind[1574]: Removed session 12. Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:58.980 [INFO][4827] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0 calico-apiserver-599794c67d- calico-apiserver 18da393d-9f84-487e-a8ed-8cbdbb46de00 1067 0 2025-11-01 00:26:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:599794c67d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-599794c67d-gvjds eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5f6ca2cb91d [] [] }} ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-gvjds" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--gvjds-" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:58.980 [INFO][4827] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-gvjds" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.015 [INFO][4860] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" HandleID="k8s-pod-network.030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.015 [INFO][4860] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" HandleID="k8s-pod-network.030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-599794c67d-gvjds", "timestamp":"2025-11-01 00:26:59.015055297 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.015 [INFO][4860] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.547 [INFO][4860] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.547 [INFO][4860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.565 [INFO][4860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" host="localhost" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.581 [INFO][4860] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.597 [INFO][4860] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.601 [INFO][4860] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.604 [INFO][4860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.604 [INFO][4860] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" host="localhost" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.609 [INFO][4860] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.617 [INFO][4860] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" host="localhost" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.626 [INFO][4860] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" host="localhost" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.637 [INFO][4860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" host="localhost" Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.640 [INFO][4860] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:26:59.741321 containerd[1600]: 2025-11-01 00:26:59.640 [INFO][4860] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" HandleID="k8s-pod-network.030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:26:59.742378 containerd[1600]: 2025-11-01 00:26:59.669 [INFO][4827] cni-plugin/k8s.go 418: Populated endpoint ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-gvjds" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0", GenerateName:"calico-apiserver-599794c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"18da393d-9f84-487e-a8ed-8cbdbb46de00", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599794c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-599794c67d-gvjds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f6ca2cb91d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:59.742378 containerd[1600]: 2025-11-01 00:26:59.669 [INFO][4827] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-gvjds" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:26:59.742378 containerd[1600]: 2025-11-01 00:26:59.669 [INFO][4827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f6ca2cb91d ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-gvjds" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:26:59.742378 containerd[1600]: 2025-11-01 00:26:59.690 [INFO][4827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-gvjds" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:26:59.742378 containerd[1600]: 2025-11-01 00:26:59.702 [INFO][4827] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-gvjds" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0", GenerateName:"calico-apiserver-599794c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"18da393d-9f84-487e-a8ed-8cbdbb46de00", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599794c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c", Pod:"calico-apiserver-599794c67d-gvjds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f6ca2cb91d", MAC:"32:08:ee:a4:4a:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:26:59.742378 containerd[1600]: 2025-11-01 00:26:59.720 [INFO][4827] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c" Namespace="calico-apiserver" Pod="calico-apiserver-599794c67d-gvjds" WorkloadEndpoint="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:26:59.748110 containerd[1600]: time="2025-11-01T00:26:59.748047786Z" level=info msg="StartContainer for \"dcd356463b7ff620e8e26a1306a26326bc367e0731f6bbc52d28b21ea54b300d\" returns successfully" Nov 1 00:26:59.780190 containerd[1600]: time="2025-11-01T00:26:59.779817996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:26:59.780595 containerd[1600]: time="2025-11-01T00:26:59.780197649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:26:59.780654 containerd[1600]: time="2025-11-01T00:26:59.780580158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:59.781650 containerd[1600]: time="2025-11-01T00:26:59.781509304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:26:59.787941 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:26:59.837923 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:26:59.843057 containerd[1600]: time="2025-11-01T00:26:59.843005283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ghftw,Uid:29f73971-767b-4aac-baa4-25b13c4b42ec,Namespace:kube-system,Attempt:1,} returns sandbox id \"5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d\"" Nov 1 00:26:59.844371 kubelet[2736]: E1101 00:26:59.843882 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:26:59.846819 containerd[1600]: time="2025-11-01T00:26:59.846787602Z" level=info msg="CreateContainer within sandbox \"5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:26:59.875483 containerd[1600]: time="2025-11-01T00:26:59.875424637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-599794c67d-gvjds,Uid:18da393d-9f84-487e-a8ed-8cbdbb46de00,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c\"" Nov 1 00:26:59.877219 containerd[1600]: time="2025-11-01T00:26:59.877028748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:00.436484 containerd[1600]: time="2025-11-01T00:27:00.436423967Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:00.672978 kubelet[2736]: E1101 00:27:00.671840 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:00.723450 containerd[1600]: time="2025-11-01T00:27:00.723247446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:00.723450 containerd[1600]: time="2025-11-01T00:27:00.723295239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:00.723927 kubelet[2736]: E1101 00:27:00.723563 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:00.723927 kubelet[2736]: E1101 00:27:00.723634 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:00.723927 kubelet[2736]: E1101 00:27:00.723804 2736 kuberuntime_manager.go:1341] "Unhandled 
Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgjns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599794c67d-gvjds_calico-apiserver(18da393d-9f84-487e-a8ed-8cbdbb46de00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:00.725034 kubelet[2736]: E1101 00:27:00.724991 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" podUID="18da393d-9f84-487e-a8ed-8cbdbb46de00" Nov 1 00:27:00.756318 containerd[1600]: time="2025-11-01T00:27:00.756246170Z" level=info msg="CreateContainer within sandbox \"5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6631937b779cd08fcbcbfaeda9807ce96c7f48a140c932b9ee69e15fc54fee6\"" Nov 1 00:27:00.759679 containerd[1600]: time="2025-11-01T00:27:00.758111161Z" level=info msg="StartContainer for \"f6631937b779cd08fcbcbfaeda9807ce96c7f48a140c932b9ee69e15fc54fee6\"" Nov 1 00:27:00.764958 kubelet[2736]: I1101 00:27:00.764101 2736 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qjdb4" podStartSLOduration=48.764067843 podStartE2EDuration="48.764067843s" podCreationTimestamp="2025-11-01 00:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:27:00.760544359 +0000 UTC m=+54.718653057" watchObservedRunningTime="2025-11-01 00:27:00.764067843 +0000 UTC m=+54.722176541" Nov 1 00:27:00.837233 containerd[1600]: time="2025-11-01T00:27:00.837083742Z" level=info msg="StartContainer for \"f6631937b779cd08fcbcbfaeda9807ce96c7f48a140c932b9ee69e15fc54fee6\" returns successfully" Nov 1 00:27:00.883619 systemd[1]: run-containerd-runc-k8s.io-f6631937b779cd08fcbcbfaeda9807ce96c7f48a140c932b9ee69e15fc54fee6-runc.ASUrmB.mount: Deactivated successfully. Nov 1 00:27:00.900492 systemd-networkd[1274]: calida0d58d341d: Gained IPv6LL Nov 1 00:27:00.964636 systemd-networkd[1274]: cali2d16da40201: Gained IPv6LL Nov 1 00:27:01.028653 systemd-networkd[1274]: cali5f6ca2cb91d: Gained IPv6LL Nov 1 00:27:01.249448 containerd[1600]: time="2025-11-01T00:27:01.249394159Z" level=info msg="StopPodSandbox for \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\"" Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.500 [INFO][5145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.500 [INFO][5145] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" iface="eth0" netns="/var/run/netns/cni-43c443ee-fb9c-00a6-78fb-2c9e3960c97f" Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.500 [INFO][5145] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" iface="eth0" netns="/var/run/netns/cni-43c443ee-fb9c-00a6-78fb-2c9e3960c97f" Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.500 [INFO][5145] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" iface="eth0" netns="/var/run/netns/cni-43c443ee-fb9c-00a6-78fb-2c9e3960c97f" Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.500 [INFO][5145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.500 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.526 [INFO][5154] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" HandleID="k8s-pod-network.10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.526 [INFO][5154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.526 [INFO][5154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.533 [WARNING][5154] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" HandleID="k8s-pod-network.10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.533 [INFO][5154] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" HandleID="k8s-pod-network.10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.535 [INFO][5154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:01.542839 containerd[1600]: 2025-11-01 00:27:01.539 [INFO][5145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:01.543619 containerd[1600]: time="2025-11-01T00:27:01.543016858Z" level=info msg="TearDown network for sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\" successfully" Nov 1 00:27:01.543619 containerd[1600]: time="2025-11-01T00:27:01.543045634Z" level=info msg="StopPodSandbox for \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\" returns successfully" Nov 1 00:27:01.544028 containerd[1600]: time="2025-11-01T00:27:01.543965127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhvvn,Uid:f97e1baa-80d7-4279-b761-fdf55a406885,Namespace:calico-system,Attempt:1,}" Nov 1 00:27:01.546756 systemd[1]: run-netns-cni\x2d43c443ee\x2dfb9c\x2d00a6\x2d78fb\x2d2c9e3960c97f.mount: Deactivated successfully. 
Nov 1 00:27:01.681414 kubelet[2736]: E1101 00:27:01.680246 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" podUID="18da393d-9f84-487e-a8ed-8cbdbb46de00" Nov 1 00:27:01.681414 kubelet[2736]: E1101 00:27:01.680865 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:01.682860 kubelet[2736]: E1101 00:27:01.682361 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:01.721432 kubelet[2736]: I1101 00:27:01.721361 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ghftw" podStartSLOduration=49.721327379 podStartE2EDuration="49.721327379s" podCreationTimestamp="2025-11-01 00:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:27:01.707127052 +0000 UTC m=+55.665235750" watchObservedRunningTime="2025-11-01 00:27:01.721327379 +0000 UTC m=+55.679436077" Nov 1 00:27:01.735180 systemd-networkd[1274]: cali25872cdd05b: Link UP Nov 1 00:27:01.735645 systemd-networkd[1274]: cali25872cdd05b: Gained carrier Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.645 [INFO][5161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lhvvn-eth0 csi-node-driver- calico-system f97e1baa-80d7-4279-b761-fdf55a406885 1121 0 2025-11-01 00:26:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lhvvn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali25872cdd05b [] [] }} ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Namespace="calico-system" Pod="csi-node-driver-lhvvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lhvvn-" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.645 [INFO][5161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Namespace="calico-system" Pod="csi-node-driver-lhvvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.675 [INFO][5177] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" HandleID="k8s-pod-network.df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.676 
[INFO][5177] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" HandleID="k8s-pod-network.df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d000), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lhvvn", "timestamp":"2025-11-01 00:27:01.675961893 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.676 [INFO][5177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.676 [INFO][5177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.676 [INFO][5177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.685 [INFO][5177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" host="localhost" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.690 [INFO][5177] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.697 [INFO][5177] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.699 [INFO][5177] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.701 [INFO][5177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.701 [INFO][5177] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" host="localhost" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.706 [INFO][5177] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80 Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.712 [INFO][5177] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" host="localhost" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.720 [INFO][5177] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" host="localhost" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.721 [INFO][5177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" host="localhost" Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.722 [INFO][5177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:27:01.753487 containerd[1600]: 2025-11-01 00:27:01.722 [INFO][5177] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" HandleID="k8s-pod-network.df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:01.754467 containerd[1600]: 2025-11-01 00:27:01.728 [INFO][5161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Namespace="calico-system" Pod="csi-node-driver-lhvvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lhvvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lhvvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f97e1baa-80d7-4279-b761-fdf55a406885", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lhvvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25872cdd05b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:01.754467 containerd[1600]: 2025-11-01 00:27:01.728 [INFO][5161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Namespace="calico-system" Pod="csi-node-driver-lhvvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:01.754467 containerd[1600]: 2025-11-01 00:27:01.728 [INFO][5161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25872cdd05b ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Namespace="calico-system" Pod="csi-node-driver-lhvvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:01.754467 containerd[1600]: 2025-11-01 00:27:01.736 [INFO][5161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Namespace="calico-system" Pod="csi-node-driver-lhvvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:01.754467 containerd[1600]: 2025-11-01 00:27:01.739 [INFO][5161] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Namespace="calico-system" Pod="csi-node-driver-lhvvn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--lhvvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lhvvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f97e1baa-80d7-4279-b761-fdf55a406885", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80", Pod:"csi-node-driver-lhvvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25872cdd05b", MAC:"b2:40:16:fc:ae:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:01.754467 containerd[1600]: 2025-11-01 00:27:01.748 [INFO][5161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80" Namespace="calico-system" Pod="csi-node-driver-lhvvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:01.771169 containerd[1600]: time="2025-11-01T00:27:01.770826750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:27:01.771169 containerd[1600]: time="2025-11-01T00:27:01.770886677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:27:01.771169 containerd[1600]: time="2025-11-01T00:27:01.770900693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:01.771169 containerd[1600]: time="2025-11-01T00:27:01.771123342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:27:01.801880 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:27:01.818969 containerd[1600]: time="2025-11-01T00:27:01.818924568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lhvvn,Uid:f97e1baa-80d7-4279-b761-fdf55a406885,Namespace:calico-system,Attempt:1,} returns sandbox id \"df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80\"" Nov 1 00:27:01.820868 containerd[1600]: time="2025-11-01T00:27:01.820831126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:27:02.166962 containerd[1600]: time="2025-11-01T00:27:02.166900428Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:02.168059 containerd[1600]: time="2025-11-01T00:27:02.168018854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:27:02.168170 containerd[1600]: time="2025-11-01T00:27:02.168108316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:27:02.168348 kubelet[2736]: E1101 00:27:02.168287 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:27:02.168408 kubelet[2736]: E1101 00:27:02.168373 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:27:02.168622 kubelet[2736]: E1101 00:27:02.168565 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b56mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lhvvn_calico-system(f97e1baa-80d7-4279-b761-fdf55a406885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:02.170489 containerd[1600]: time="2025-11-01T00:27:02.170464069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:27:02.501790 containerd[1600]: time="2025-11-01T00:27:02.501630542Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:02.502924 containerd[1600]: time="2025-11-01T00:27:02.502861315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:27:02.503051 containerd[1600]: time="2025-11-01T00:27:02.502969685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:27:02.503217 kubelet[2736]: E1101 00:27:02.503169 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:27:02.503280 kubelet[2736]: E1101 00:27:02.503235 2736 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:27:02.503450 kubelet[2736]: E1101 00:27:02.503402 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b56mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lhvvn_calico-system(f97e1baa-80d7-4279-b761-fdf55a406885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:02.505276 kubelet[2736]: E1101 00:27:02.505239 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:27:02.683215 kubelet[2736]: E1101 00:27:02.683170 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:02.684853 kubelet[2736]: E1101 00:27:02.683816 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:02.684853 kubelet[2736]: E1101 00:27:02.684034 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:27:03.012606 systemd-networkd[1274]: cali25872cdd05b: Gained IPv6LL Nov 1 00:27:03.685274 kubelet[2736]: E1101 00:27:03.685217 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:03.686657 kubelet[2736]: E1101 00:27:03.686063 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:27:04.251360 containerd[1600]: time="2025-11-01T00:27:04.251280473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:27:04.623453 containerd[1600]: time="2025-11-01T00:27:04.623308511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:04.624880 containerd[1600]: time="2025-11-01T00:27:04.624826273Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:27:04.624976 containerd[1600]: time="2025-11-01T00:27:04.624861500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:27:04.625145 kubelet[2736]: E1101 00:27:04.625083 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:27:04.625199 kubelet[2736]: E1101 00:27:04.625153 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:27:04.625305 kubelet[2736]: E1101 00:27:04.625265 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:846d76463aa0425393ff76a8db3a1708,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97c27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d87fdf9f-wx6gc_calico-system(65dc1f21-74ec-412a-ad9c-6e2587acdbb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:04.627922 containerd[1600]: time="2025-11-01T00:27:04.627565476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:27:04.714630 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:51216.service - OpenSSH per-connection server daemon (10.0.0.1:51216). 
Nov 1 00:27:04.748559 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 51216 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:04.750536 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:04.754656 systemd-logind[1574]: New session 13 of user core. Nov 1 00:27:04.762633 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:27:04.911252 sshd[5242]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:04.921670 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:51232.service - OpenSSH per-connection server daemon (10.0.0.1:51232). Nov 1 00:27:04.922198 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:51216.service: Deactivated successfully. Nov 1 00:27:04.925355 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:27:04.927628 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:27:04.928576 systemd-logind[1574]: Removed session 13. Nov 1 00:27:04.953982 sshd[5255]: Accepted publickey for core from 10.0.0.1 port 51232 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:04.955706 sshd[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:04.959913 systemd-logind[1574]: New session 14 of user core. Nov 1 00:27:04.964987 containerd[1600]: time="2025-11-01T00:27:04.964946902Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:04.971616 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:27:05.027068 containerd[1600]: time="2025-11-01T00:27:05.026995827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:27:05.027282 containerd[1600]: time="2025-11-01T00:27:05.027035403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:27:05.027360 kubelet[2736]: E1101 00:27:05.027283 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:27:05.027886 kubelet[2736]: E1101 00:27:05.027362 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:27:05.027886 kubelet[2736]: E1101 00:27:05.027509 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97c27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d87fdf9f-wx6gc_calico-system(65dc1f21-74ec-412a-ad9c-6e2587acdbb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:05.029279 kubelet[2736]: E1101 00:27:05.029090 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55d87fdf9f-wx6gc" podUID="65dc1f21-74ec-412a-ad9c-6e2587acdbb7" Nov 1 00:27:05.122264 sshd[5255]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:05.133196 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:51248.service - OpenSSH per-connection server daemon (10.0.0.1:51248). Nov 1 00:27:05.137958 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:51232.service: Deactivated successfully. Nov 1 00:27:05.142438 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:27:05.147908 systemd-logind[1574]: Session 14 logged out. 
Waiting for processes to exit. Nov 1 00:27:05.149458 systemd-logind[1574]: Removed session 14. Nov 1 00:27:05.173146 sshd[5269]: Accepted publickey for core from 10.0.0.1 port 51248 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:05.174879 sshd[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:05.179089 systemd-logind[1574]: New session 15 of user core. Nov 1 00:27:05.189601 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:27:05.314508 sshd[5269]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:05.318611 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:51248.service: Deactivated successfully. Nov 1 00:27:05.321148 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:27:05.321246 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:27:05.322508 systemd-logind[1574]: Removed session 15. Nov 1 00:27:06.229557 containerd[1600]: time="2025-11-01T00:27:06.229506771Z" level=info msg="StopPodSandbox for \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\"" Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.269 [WARNING][5297] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vv2f4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e7c7365e-fed9-44a2-bb07-9942249f952b", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2", Pod:"goldmane-666569f655-vv2f4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8a249a30370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.269 [INFO][5297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.269 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" iface="eth0" netns="" Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.269 [INFO][5297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.269 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.294 [INFO][5308] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" HandleID="k8s-pod-network.3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.294 [INFO][5308] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.294 [INFO][5308] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.303 [WARNING][5308] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" HandleID="k8s-pod-network.3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.303 [INFO][5308] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" HandleID="k8s-pod-network.3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.304 [INFO][5308] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:06.310474 containerd[1600]: 2025-11-01 00:27:06.307 [INFO][5297] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:27:06.311017 containerd[1600]: time="2025-11-01T00:27:06.310536916Z" level=info msg="TearDown network for sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\" successfully" Nov 1 00:27:06.311017 containerd[1600]: time="2025-11-01T00:27:06.310568999Z" level=info msg="StopPodSandbox for \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\" returns successfully" Nov 1 00:27:06.312727 containerd[1600]: time="2025-11-01T00:27:06.312694475Z" level=info msg="RemovePodSandbox for \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\"" Nov 1 00:27:06.315436 containerd[1600]: time="2025-11-01T00:27:06.315390880Z" level=info msg="Forcibly stopping sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\"" Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.348 [WARNING][5325] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vv2f4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e7c7365e-fed9-44a2-bb07-9942249f952b", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a5865d6d80a7d8aa49f09bf56ccdf6d04f4b4efdd72cc55287746b82521f3f2", Pod:"goldmane-666569f655-vv2f4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8a249a30370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.349 [INFO][5325] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.349 [INFO][5325] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" iface="eth0" netns="" Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.349 [INFO][5325] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.349 [INFO][5325] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.370 [INFO][5334] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" HandleID="k8s-pod-network.3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.370 [INFO][5334] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.370 [INFO][5334] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.375 [WARNING][5334] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" HandleID="k8s-pod-network.3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.375 [INFO][5334] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" HandleID="k8s-pod-network.3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Workload="localhost-k8s-goldmane--666569f655--vv2f4-eth0" Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.377 [INFO][5334] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:06.382596 containerd[1600]: 2025-11-01 00:27:06.379 [INFO][5325] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6" Nov 1 00:27:06.383073 containerd[1600]: time="2025-11-01T00:27:06.382643586Z" level=info msg="TearDown network for sandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\" successfully" Nov 1 00:27:06.399156 containerd[1600]: time="2025-11-01T00:27:06.399086230Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:06.399156 containerd[1600]: time="2025-11-01T00:27:06.399156004Z" level=info msg="RemovePodSandbox \"3e8e1299e481b266a7267dc9c07a916e8f5397269c5cb086741732cc919757d6\" returns successfully" Nov 1 00:27:06.399928 containerd[1600]: time="2025-11-01T00:27:06.399874485Z" level=info msg="StopPodSandbox for \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\"" Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.434 [WARNING][5351] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30524ccc-9256-4e38-a18e-44025e0e57e8", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043", Pod:"coredns-668d6bf9bc-qjdb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d16da40201", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.435 [INFO][5351] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.435 [INFO][5351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" iface="eth0" netns="" Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.435 [INFO][5351] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.435 [INFO][5351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.459 [INFO][5360] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" HandleID="k8s-pod-network.2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.459 [INFO][5360] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.459 [INFO][5360] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.463 [WARNING][5360] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" HandleID="k8s-pod-network.2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.464 [INFO][5360] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" HandleID="k8s-pod-network.2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.465 [INFO][5360] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:06.470792 containerd[1600]: 2025-11-01 00:27:06.467 [INFO][5351] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:27:06.471389 containerd[1600]: time="2025-11-01T00:27:06.470814130Z" level=info msg="TearDown network for sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\" successfully" Nov 1 00:27:06.471389 containerd[1600]: time="2025-11-01T00:27:06.470841363Z" level=info msg="StopPodSandbox for \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\" returns successfully" Nov 1 00:27:06.471479 containerd[1600]: time="2025-11-01T00:27:06.471438020Z" level=info msg="RemovePodSandbox for \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\"" Nov 1 00:27:06.471518 containerd[1600]: time="2025-11-01T00:27:06.471478288Z" level=info msg="Forcibly stopping sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\"" Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.507 [WARNING][5378] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"30524ccc-9256-4e38-a18e-44025e0e57e8", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fdae7f4bc14c1f6f94813a1e719d33d62c6fdfe0bd5f4f3f6d31be34e3e8043", Pod:"coredns-668d6bf9bc-qjdb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d16da40201", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.507 [INFO][5378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.507 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" iface="eth0" netns="" Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.507 [INFO][5378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.507 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.531 [INFO][5386] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" HandleID="k8s-pod-network.2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.532 [INFO][5386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.532 [INFO][5386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.537 [WARNING][5386] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" HandleID="k8s-pod-network.2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.537 [INFO][5386] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" HandleID="k8s-pod-network.2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Workload="localhost-k8s-coredns--668d6bf9bc--qjdb4-eth0" Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.538 [INFO][5386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:06.544450 containerd[1600]: 2025-11-01 00:27:06.541 [INFO][5378] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a" Nov 1 00:27:06.544450 containerd[1600]: time="2025-11-01T00:27:06.544304120Z" level=info msg="TearDown network for sandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\" successfully" Nov 1 00:27:06.548978 containerd[1600]: time="2025-11-01T00:27:06.548933951Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:06.549100 containerd[1600]: time="2025-11-01T00:27:06.548996471Z" level=info msg="RemovePodSandbox \"2f35bfbee96413c49424325656ce017a3790fa12568247e71b5af3b59f66f38a\" returns successfully" Nov 1 00:27:06.549709 containerd[1600]: time="2025-11-01T00:27:06.549655008Z" level=info msg="StopPodSandbox for \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\"" Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.585 [WARNING][5404] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0", GenerateName:"calico-kube-controllers-77dccc7d57-", Namespace:"calico-system", SelfLink:"", UID:"26210ce5-453a-47fe-b5c4-bb7d1e50d30b", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77dccc7d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4", Pod:"calico-kube-controllers-77dccc7d57-zdj9g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali936739c8dc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.585 [INFO][5404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.585 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" iface="eth0" netns="" Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.585 [INFO][5404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.585 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.608 [INFO][5413] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" HandleID="k8s-pod-network.b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.609 [INFO][5413] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.609 [INFO][5413] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.614 [WARNING][5413] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" HandleID="k8s-pod-network.b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.615 [INFO][5413] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" HandleID="k8s-pod-network.b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.616 [INFO][5413] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:06.621578 containerd[1600]: 2025-11-01 00:27:06.619 [INFO][5404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:27:06.622027 containerd[1600]: time="2025-11-01T00:27:06.621662055Z" level=info msg="TearDown network for sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\" successfully" Nov 1 00:27:06.622027 containerd[1600]: time="2025-11-01T00:27:06.621692043Z" level=info msg="StopPodSandbox for \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\" returns successfully" Nov 1 00:27:06.622284 containerd[1600]: time="2025-11-01T00:27:06.622253852Z" level=info msg="RemovePodSandbox for \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\"" Nov 1 00:27:06.622318 containerd[1600]: time="2025-11-01T00:27:06.622284552Z" level=info msg="Forcibly stopping sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\"" Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.659 [WARNING][5431] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0", GenerateName:"calico-kube-controllers-77dccc7d57-", Namespace:"calico-system", SelfLink:"", UID:"26210ce5-453a-47fe-b5c4-bb7d1e50d30b", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77dccc7d57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96fe11e60c0d4a06161428478c8edade911442694141c26c394fd10def68d7b4", Pod:"calico-kube-controllers-77dccc7d57-zdj9g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali936739c8dc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.659 [INFO][5431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.659 [INFO][5431] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" iface="eth0" netns="" Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.659 [INFO][5431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.659 [INFO][5431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.682 [INFO][5439] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" HandleID="k8s-pod-network.b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.683 [INFO][5439] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.683 [INFO][5439] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.689 [WARNING][5439] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" HandleID="k8s-pod-network.b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.689 [INFO][5439] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" HandleID="k8s-pod-network.b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Workload="localhost-k8s-calico--kube--controllers--77dccc7d57--zdj9g-eth0" Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.691 [INFO][5439] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:06.697825 containerd[1600]: 2025-11-01 00:27:06.694 [INFO][5431] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8" Nov 1 00:27:06.698283 containerd[1600]: time="2025-11-01T00:27:06.697877684Z" level=info msg="TearDown network for sandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\" successfully" Nov 1 00:27:06.702711 containerd[1600]: time="2025-11-01T00:27:06.702660580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:06.702711 containerd[1600]: time="2025-11-01T00:27:06.702720856Z" level=info msg="RemovePodSandbox \"b12dbfa4ec996d7f2b581a928f592de6922d67d75f49b6b5b5ccd58ba4300bb8\" returns successfully" Nov 1 00:27:06.703251 containerd[1600]: time="2025-11-01T00:27:06.703201941Z" level=info msg="StopPodSandbox for \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\"" Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.745 [WARNING][5457] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0", GenerateName:"calico-apiserver-599794c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"18da393d-9f84-487e-a8ed-8cbdbb46de00", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599794c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c", Pod:"calico-apiserver-599794c67d-gvjds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f6ca2cb91d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.746 [INFO][5457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.746 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" iface="eth0" netns="" Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.746 [INFO][5457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.746 [INFO][5457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.809 [INFO][5465] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" HandleID="k8s-pod-network.388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.810 [INFO][5465] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.810 [INFO][5465] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.815 [WARNING][5465] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" HandleID="k8s-pod-network.388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.815 [INFO][5465] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" HandleID="k8s-pod-network.388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.816 [INFO][5465] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:06.822166 containerd[1600]: 2025-11-01 00:27:06.819 [INFO][5457] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:27:06.822166 containerd[1600]: time="2025-11-01T00:27:06.822146655Z" level=info msg="TearDown network for sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\" successfully" Nov 1 00:27:06.822960 containerd[1600]: time="2025-11-01T00:27:06.822179088Z" level=info msg="StopPodSandbox for \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\" returns successfully" Nov 1 00:27:06.822960 containerd[1600]: time="2025-11-01T00:27:06.822779241Z" level=info msg="RemovePodSandbox for \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\"" Nov 1 00:27:06.822960 containerd[1600]: time="2025-11-01T00:27:06.822816052Z" level=info msg="Forcibly stopping sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\"" Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.863 [WARNING][5485] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0", GenerateName:"calico-apiserver-599794c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"18da393d-9f84-487e-a8ed-8cbdbb46de00", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599794c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"030486d3a2f19d82dea80420c09181d673cf3d2840d6656b907b7814f6cdae5c", Pod:"calico-apiserver-599794c67d-gvjds", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f6ca2cb91d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.863 [INFO][5485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.863 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" iface="eth0" netns="" Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.863 [INFO][5485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.863 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.885 [INFO][5494] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" HandleID="k8s-pod-network.388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.885 [INFO][5494] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.885 [INFO][5494] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.890 [WARNING][5494] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" HandleID="k8s-pod-network.388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.890 [INFO][5494] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" HandleID="k8s-pod-network.388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Workload="localhost-k8s-calico--apiserver--599794c67d--gvjds-eth0" Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.891 [INFO][5494] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:06.897490 containerd[1600]: 2025-11-01 00:27:06.894 [INFO][5485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a" Nov 1 00:27:06.897941 containerd[1600]: time="2025-11-01T00:27:06.897539182Z" level=info msg="TearDown network for sandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\" successfully" Nov 1 00:27:07.115575 containerd[1600]: time="2025-11-01T00:27:07.115526827Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:07.115741 containerd[1600]: time="2025-11-01T00:27:07.115597452Z" level=info msg="RemovePodSandbox \"388145b314a808cad8b878a7ebaed44a087875026bf639132fc6ecfca02c210a\" returns successfully" Nov 1 00:27:07.116211 containerd[1600]: time="2025-11-01T00:27:07.116181584Z" level=info msg="StopPodSandbox for \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\"" Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.153 [WARNING][5512] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ghftw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29f73971-767b-4aac-baa4-25b13c4b42ec", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d", Pod:"coredns-668d6bf9bc-ghftw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida0d58d341d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.153 [INFO][5512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.153 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" iface="eth0" netns="" Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.153 [INFO][5512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.153 [INFO][5512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.176 [INFO][5520] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" HandleID="k8s-pod-network.6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.176 [INFO][5520] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.176 [INFO][5520] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.182 [WARNING][5520] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" HandleID="k8s-pod-network.6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.182 [INFO][5520] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" HandleID="k8s-pod-network.6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.184 [INFO][5520] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:07.189240 containerd[1600]: 2025-11-01 00:27:07.186 [INFO][5512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:27:07.189823 containerd[1600]: time="2025-11-01T00:27:07.189292596Z" level=info msg="TearDown network for sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\" successfully" Nov 1 00:27:07.189823 containerd[1600]: time="2025-11-01T00:27:07.189330600Z" level=info msg="StopPodSandbox for \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\" returns successfully" Nov 1 00:27:07.189940 containerd[1600]: time="2025-11-01T00:27:07.189894794Z" level=info msg="RemovePodSandbox for \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\"" Nov 1 00:27:07.189977 containerd[1600]: time="2025-11-01T00:27:07.189945080Z" level=info msg="Forcibly stopping sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\"" Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.223 [WARNING][5538] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ghftw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29f73971-767b-4aac-baa4-25b13c4b42ec", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5bcbe1c64bc5d42dde17b30cf8be4b6c924703a7fdec0abbe3fee5ad1cc23b5d", Pod:"coredns-668d6bf9bc-ghftw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida0d58d341d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.223 [INFO][5538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.223 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" iface="eth0" netns="" Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.223 [INFO][5538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.224 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.245 [INFO][5546] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" HandleID="k8s-pod-network.6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.245 [INFO][5546] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.245 [INFO][5546] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.252 [WARNING][5546] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" HandleID="k8s-pod-network.6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.252 [INFO][5546] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" HandleID="k8s-pod-network.6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Workload="localhost-k8s-coredns--668d6bf9bc--ghftw-eth0" Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.253 [INFO][5546] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:07.259300 containerd[1600]: 2025-11-01 00:27:07.256 [INFO][5538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784" Nov 1 00:27:07.260141 containerd[1600]: time="2025-11-01T00:27:07.259366773Z" level=info msg="TearDown network for sandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\" successfully" Nov 1 00:27:07.405014 containerd[1600]: time="2025-11-01T00:27:07.404860558Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:07.405014 containerd[1600]: time="2025-11-01T00:27:07.404966089Z" level=info msg="RemovePodSandbox \"6b13a12c289c378b08e715aceb009bc211232022b847f87c796be2caf46b2784\" returns successfully" Nov 1 00:27:07.405516 containerd[1600]: time="2025-11-01T00:27:07.405482792Z" level=info msg="StopPodSandbox for \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\"" Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.442 [WARNING][5564] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lhvvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f97e1baa-80d7-4279-b761-fdf55a406885", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80", Pod:"csi-node-driver-lhvvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25872cdd05b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.442 [INFO][5564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.442 [INFO][5564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" iface="eth0" netns="" Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.442 [INFO][5564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.442 [INFO][5564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.466 [INFO][5573] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" HandleID="k8s-pod-network.10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.466 [INFO][5573] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.466 [INFO][5573] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.472 [WARNING][5573] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" HandleID="k8s-pod-network.10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.472 [INFO][5573] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" HandleID="k8s-pod-network.10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.474 [INFO][5573] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:07.479727 containerd[1600]: 2025-11-01 00:27:07.476 [INFO][5564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:07.480159 containerd[1600]: time="2025-11-01T00:27:07.479782148Z" level=info msg="TearDown network for sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\" successfully" Nov 1 00:27:07.480159 containerd[1600]: time="2025-11-01T00:27:07.479811273Z" level=info msg="StopPodSandbox for \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\" returns successfully" Nov 1 00:27:07.480533 containerd[1600]: time="2025-11-01T00:27:07.480491962Z" level=info msg="RemovePodSandbox for \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\"" Nov 1 00:27:07.480594 containerd[1600]: time="2025-11-01T00:27:07.480545695Z" level=info msg="Forcibly stopping sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\"" Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.513 [WARNING][5591] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lhvvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f97e1baa-80d7-4279-b761-fdf55a406885", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df93f79b91b221fb1eb3658e1abce8f0d2b37ac179b166347b108d29d05ebc80", Pod:"csi-node-driver-lhvvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali25872cdd05b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.513 [INFO][5591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.513 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" iface="eth0" netns="" Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.513 [INFO][5591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.513 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.540 [INFO][5599] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" HandleID="k8s-pod-network.10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.540 [INFO][5599] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.540 [INFO][5599] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.545 [WARNING][5599] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" HandleID="k8s-pod-network.10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.545 [INFO][5599] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" HandleID="k8s-pod-network.10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Workload="localhost-k8s-csi--node--driver--lhvvn-eth0" Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.547 [INFO][5599] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:07.553350 containerd[1600]: 2025-11-01 00:27:07.550 [INFO][5591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451" Nov 1 00:27:07.553854 containerd[1600]: time="2025-11-01T00:27:07.553413219Z" level=info msg="TearDown network for sandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\" successfully" Nov 1 00:27:07.557928 containerd[1600]: time="2025-11-01T00:27:07.557900401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:07.557985 containerd[1600]: time="2025-11-01T00:27:07.557950517Z" level=info msg="RemovePodSandbox \"10f759b075e89b292b6c1a00921e27063a7939e416a149484b2493d333b9c451\" returns successfully" Nov 1 00:27:07.558536 containerd[1600]: time="2025-11-01T00:27:07.558497168Z" level=info msg="StopPodSandbox for \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\"" Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.594 [WARNING][5616] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0", GenerateName:"calico-apiserver-599794c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"151d3855-7594-4722-a64f-ba8ae7061d01", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599794c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88", Pod:"calico-apiserver-599794c67d-92gsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie56fabf58e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.595 [INFO][5616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.595 [INFO][5616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" iface="eth0" netns="" Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.595 [INFO][5616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.595 [INFO][5616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.623 [INFO][5625] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" HandleID="k8s-pod-network.ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.623 [INFO][5625] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.624 [INFO][5625] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.631 [WARNING][5625] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" HandleID="k8s-pod-network.ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.631 [INFO][5625] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" HandleID="k8s-pod-network.ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.633 [INFO][5625] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:07.641741 containerd[1600]: 2025-11-01 00:27:07.638 [INFO][5616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:27:07.642204 containerd[1600]: time="2025-11-01T00:27:07.641789853Z" level=info msg="TearDown network for sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\" successfully" Nov 1 00:27:07.642204 containerd[1600]: time="2025-11-01T00:27:07.641816254Z" level=info msg="StopPodSandbox for \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\" returns successfully" Nov 1 00:27:07.642284 containerd[1600]: time="2025-11-01T00:27:07.642261119Z" level=info msg="RemovePodSandbox for \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\"" Nov 1 00:27:07.642367 containerd[1600]: time="2025-11-01T00:27:07.642289443Z" level=info msg="Forcibly stopping sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\"" Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.677 [WARNING][5645] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0", GenerateName:"calico-apiserver-599794c67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"151d3855-7594-4722-a64f-ba8ae7061d01", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"599794c67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7556549078363774d958860e84ea2252abec04919d07acaeffe29bc5735ee88", Pod:"calico-apiserver-599794c67d-92gsc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie56fabf58e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.677 [INFO][5645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.677 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" iface="eth0" netns="" Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.677 [INFO][5645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.677 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.700 [INFO][5654] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" HandleID="k8s-pod-network.ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.700 [INFO][5654] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.700 [INFO][5654] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.706 [WARNING][5654] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" HandleID="k8s-pod-network.ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.706 [INFO][5654] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" HandleID="k8s-pod-network.ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Workload="localhost-k8s-calico--apiserver--599794c67d--92gsc-eth0" Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.707 [INFO][5654] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:07.713324 containerd[1600]: 2025-11-01 00:27:07.710 [INFO][5645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68" Nov 1 00:27:07.713851 containerd[1600]: time="2025-11-01T00:27:07.713292946Z" level=info msg="TearDown network for sandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\" successfully" Nov 1 00:27:07.974840 containerd[1600]: time="2025-11-01T00:27:07.974666150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:07.974840 containerd[1600]: time="2025-11-01T00:27:07.974778546Z" level=info msg="RemovePodSandbox \"ffcc2b8cd77212c38188e4bd04faac93dbfcf830be4dbde8a46fc0c9129afd68\" returns successfully" Nov 1 00:27:07.975547 containerd[1600]: time="2025-11-01T00:27:07.975496967Z" level=info msg="StopPodSandbox for \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\"" Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.014 [WARNING][5671] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" WorkloadEndpoint="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.014 [INFO][5671] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.014 [INFO][5671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" iface="eth0" netns="" Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.014 [INFO][5671] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.014 [INFO][5671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.048 [INFO][5680] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" HandleID="k8s-pod-network.284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Workload="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.048 [INFO][5680] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.048 [INFO][5680] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.057 [WARNING][5680] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" HandleID="k8s-pod-network.284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Workload="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.057 [INFO][5680] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" HandleID="k8s-pod-network.284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Workload="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.059 [INFO][5680] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:08.065527 containerd[1600]: 2025-11-01 00:27:08.062 [INFO][5671] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:27:08.065912 containerd[1600]: time="2025-11-01T00:27:08.065573252Z" level=info msg="TearDown network for sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\" successfully" Nov 1 00:27:08.065912 containerd[1600]: time="2025-11-01T00:27:08.065601866Z" level=info msg="StopPodSandbox for \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\" returns successfully" Nov 1 00:27:08.066087 containerd[1600]: time="2025-11-01T00:27:08.066062872Z" level=info msg="RemovePodSandbox for \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\"" Nov 1 00:27:08.066120 containerd[1600]: time="2025-11-01T00:27:08.066092608Z" level=info msg="Forcibly stopping sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\"" Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.101 [WARNING][5699] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" WorkloadEndpoint="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.102 [INFO][5699] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.102 [INFO][5699] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" iface="eth0" netns="" Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.102 [INFO][5699] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.102 [INFO][5699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.120 [INFO][5707] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" HandleID="k8s-pod-network.284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Workload="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.120 [INFO][5707] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.120 [INFO][5707] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.125 [WARNING][5707] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" HandleID="k8s-pod-network.284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Workload="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.125 [INFO][5707] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" HandleID="k8s-pod-network.284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Workload="localhost-k8s-whisker--5fc5d9467--nfltp-eth0" Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.127 [INFO][5707] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:27:08.132706 containerd[1600]: 2025-11-01 00:27:08.129 [INFO][5699] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e" Nov 1 00:27:08.133099 containerd[1600]: time="2025-11-01T00:27:08.132763287Z" level=info msg="TearDown network for sandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\" successfully" Nov 1 00:27:08.136755 containerd[1600]: time="2025-11-01T00:27:08.136710658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:27:08.136828 containerd[1600]: time="2025-11-01T00:27:08.136794369Z" level=info msg="RemovePodSandbox \"284498d8188f63ed592bb8324d45c778d32d21ffaf5b135e0ed2f467ab27433e\" returns successfully" Nov 1 00:27:09.251086 containerd[1600]: time="2025-11-01T00:27:09.250522365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:27:09.558252 containerd[1600]: time="2025-11-01T00:27:09.558050256Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:09.559397 containerd[1600]: time="2025-11-01T00:27:09.559346985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:27:09.559542 containerd[1600]: time="2025-11-01T00:27:09.559450994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:27:09.559689 kubelet[2736]: E1101 00:27:09.559631 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:27:09.560162 kubelet[2736]: E1101 00:27:09.559699 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:27:09.560162 kubelet[2736]: E1101 00:27:09.560015 
2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxtr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77dccc7d57-zdj9g_calico-system(26210ce5-453a-47fe-b5c4-bb7d1e50d30b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:09.560300 containerd[1600]: time="2025-11-01T00:27:09.560113436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:27:09.561751 kubelet[2736]: E1101 00:27:09.561700 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" podUID="26210ce5-453a-47fe-b5c4-bb7d1e50d30b" Nov 1 00:27:09.876600 containerd[1600]: time="2025-11-01T00:27:09.876545536Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:09.941004 containerd[1600]: time="2025-11-01T00:27:09.940905505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:09.941004 containerd[1600]: time="2025-11-01T00:27:09.940945502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:27:09.941305 kubelet[2736]: E1101 00:27:09.941242 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:27:09.941376 kubelet[2736]: E1101 00:27:09.941313 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:27:09.941588 kubelet[2736]: E1101 00:27:09.941522 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56b8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vv2f4_calico-system(e7c7365e-fed9-44a2-bb07-9942249f952b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:09.942741 kubelet[2736]: E1101 00:27:09.942696 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vv2f4" podUID="e7c7365e-fed9-44a2-bb07-9942249f952b" Nov 1 00:27:10.330706 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:53126.service - OpenSSH per-connection server daemon (10.0.0.1:53126). Nov 1 00:27:10.363923 sshd[5715]: Accepted publickey for core from 10.0.0.1 port 53126 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:10.365826 sshd[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:10.369863 systemd-logind[1574]: New session 16 of user core. Nov 1 00:27:10.378626 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:27:10.501596 sshd[5715]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:10.506147 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:53126.service: Deactivated successfully. Nov 1 00:27:10.508513 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:27:10.508602 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:27:10.509594 systemd-logind[1574]: Removed session 16. 
Nov 1 00:27:12.249978 containerd[1600]: time="2025-11-01T00:27:12.249910470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:12.589877 containerd[1600]: time="2025-11-01T00:27:12.589662591Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:12.591062 containerd[1600]: time="2025-11-01T00:27:12.591019561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:12.591211 containerd[1600]: time="2025-11-01T00:27:12.591127749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:12.591395 kubelet[2736]: E1101 00:27:12.591302 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:12.591943 kubelet[2736]: E1101 00:27:12.591409 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:12.591943 kubelet[2736]: E1101 00:27:12.591612 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mp69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599794c67d-92gsc_calico-apiserver(151d3855-7594-4722-a64f-ba8ae7061d01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:12.593491 kubelet[2736]: E1101 00:27:12.593446 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" podUID="151d3855-7594-4722-a64f-ba8ae7061d01" Nov 1 00:27:15.250626 containerd[1600]: time="2025-11-01T00:27:15.250281782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:15.517654 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:53132.service - OpenSSH per-connection server daemon (10.0.0.1:53132). Nov 1 00:27:15.546968 sshd[5740]: Accepted publickey for core from 10.0.0.1 port 53132 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:15.548682 sshd[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:15.553498 systemd-logind[1574]: New session 17 of user core. Nov 1 00:27:15.563628 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:27:15.685769 containerd[1600]: time="2025-11-01T00:27:15.685692751Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:15.687810 sshd[5740]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:15.692217 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:53132.service: Deactivated successfully. Nov 1 00:27:15.695153 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:27:15.695159 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:27:15.696494 systemd-logind[1574]: Removed session 17. 
Nov 1 00:27:15.762850 containerd[1600]: time="2025-11-01T00:27:15.762762291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:15.763046 containerd[1600]: time="2025-11-01T00:27:15.762781267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:15.763155 kubelet[2736]: E1101 00:27:15.763096 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:15.763692 kubelet[2736]: E1101 00:27:15.763164 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:15.763692 kubelet[2736]: E1101 00:27:15.763457 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgjns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-599794c67d-gvjds_calico-apiserver(18da393d-9f84-487e-a8ed-8cbdbb46de00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:15.765045 containerd[1600]: time="2025-11-01T00:27:15.763925376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:27:15.765469 kubelet[2736]: E1101 00:27:15.765390 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" podUID="18da393d-9f84-487e-a8ed-8cbdbb46de00" Nov 1 00:27:16.087719 containerd[1600]: time="2025-11-01T00:27:16.087646333Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:16.089015 containerd[1600]: time="2025-11-01T00:27:16.088970134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:27:16.089115 containerd[1600]: time="2025-11-01T00:27:16.089055247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:27:16.089252 kubelet[2736]: E1101 00:27:16.089199 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:27:16.089318 kubelet[2736]: E1101 00:27:16.089266 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:27:16.089526 kubelet[2736]: E1101 00:27:16.089454 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b56mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lhvvn_calico-system(f97e1baa-80d7-4279-b761-fdf55a406885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:16.091735 containerd[1600]: time="2025-11-01T00:27:16.091685707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:27:16.452510 containerd[1600]: time="2025-11-01T00:27:16.452435435Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:16.453693 containerd[1600]: time="2025-11-01T00:27:16.453645398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:27:16.453821 containerd[1600]: time="2025-11-01T00:27:16.453747253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:27:16.454013 kubelet[2736]: E1101 00:27:16.453950 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:27:16.454092 kubelet[2736]: E1101 00:27:16.454022 2736 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:27:16.454192 kubelet[2736]: E1101 00:27:16.454151 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b56mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lhvvn_calico-system(f97e1baa-80d7-4279-b761-fdf55a406885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:16.455429 kubelet[2736]: E1101 00:27:16.455380 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:27:18.250764 kubelet[2736]: E1101 00:27:18.250690 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55d87fdf9f-wx6gc" podUID="65dc1f21-74ec-412a-ad9c-6e2587acdbb7" Nov 1 00:27:20.699709 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:52260.service - OpenSSH per-connection server daemon (10.0.0.1:52260). Nov 1 00:27:20.732225 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 52260 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:20.734033 sshd[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:20.738055 systemd-logind[1574]: New session 18 of user core. Nov 1 00:27:20.743619 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:27:20.865527 sshd[5755]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:20.870053 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:52260.service: Deactivated successfully. Nov 1 00:27:20.872941 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:27:20.873056 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:27:20.874422 systemd-logind[1574]: Removed session 18. 
Nov 1 00:27:21.249026 kubelet[2736]: E1101 00:27:21.248980 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:21.249646 kubelet[2736]: E1101 00:27:21.249610 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" podUID="26210ce5-453a-47fe-b5c4-bb7d1e50d30b" Nov 1 00:27:22.249754 kubelet[2736]: E1101 00:27:22.249690 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vv2f4" podUID="e7c7365e-fed9-44a2-bb07-9942249f952b" Nov 1 00:27:25.249792 kubelet[2736]: E1101 00:27:25.249704 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" podUID="151d3855-7594-4722-a64f-ba8ae7061d01" Nov 1 00:27:25.878714 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:52268.service - OpenSSH per-connection server daemon (10.0.0.1:52268). Nov 1 00:27:25.926458 sshd[5795]: Accepted publickey for core from 10.0.0.1 port 52268 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:25.928945 sshd[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:25.934492 systemd-logind[1574]: New session 19 of user core. Nov 1 00:27:25.943871 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:27:26.141411 sshd[5795]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:26.158821 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:39714.service - OpenSSH per-connection server daemon (10.0.0.1:39714). Nov 1 00:27:26.159810 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:52268.service: Deactivated successfully. Nov 1 00:27:26.165620 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:27:26.169228 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:27:26.173921 systemd-logind[1574]: Removed session 19. 
Nov 1 00:27:26.204412 sshd[5809]: Accepted publickey for core from 10.0.0.1 port 39714 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:26.207120 sshd[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:26.216580 systemd-logind[1574]: New session 20 of user core. Nov 1 00:27:26.228123 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:27:26.609814 sshd[5809]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:26.618600 systemd[1]: Started sshd@20-10.0.0.124:22-10.0.0.1:39724.service - OpenSSH per-connection server daemon (10.0.0.1:39724). Nov 1 00:27:26.619223 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:39714.service: Deactivated successfully. Nov 1 00:27:26.624951 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:27:26.625541 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:27:26.626936 systemd-logind[1574]: Removed session 20. Nov 1 00:27:26.658442 sshd[5822]: Accepted publickey for core from 10.0.0.1 port 39724 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:26.660537 sshd[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:26.665537 systemd-logind[1574]: New session 21 of user core. Nov 1 00:27:26.674765 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:27:27.196215 sshd[5822]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:27.205713 systemd[1]: Started sshd@21-10.0.0.124:22-10.0.0.1:39734.service - OpenSSH per-connection server daemon (10.0.0.1:39734). Nov 1 00:27:27.206294 systemd[1]: sshd@20-10.0.0.124:22-10.0.0.1:39724.service: Deactivated successfully. Nov 1 00:27:27.218137 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:27:27.226366 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:27:27.229035 systemd-logind[1574]: Removed session 21. Nov 1 00:27:27.249971 kubelet[2736]: E1101 00:27:27.249906 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" podUID="18da393d-9f84-487e-a8ed-8cbdbb46de00" Nov 1 00:27:27.253884 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 39734 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:27.256354 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:27.263175 systemd-logind[1574]: New session 22 of user core. Nov 1 00:27:27.272694 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:27:27.513578 sshd[5841]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:27.526102 systemd[1]: Started sshd@22-10.0.0.124:22-10.0.0.1:39748.service - OpenSSH per-connection server daemon (10.0.0.1:39748). Nov 1 00:27:27.526859 systemd[1]: sshd@21-10.0.0.124:22-10.0.0.1:39734.service: Deactivated successfully. Nov 1 00:27:27.529621 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:27:27.531006 systemd-logind[1574]: Session 22 logged out. 
Waiting for processes to exit. Nov 1 00:27:27.533637 systemd-logind[1574]: Removed session 22. Nov 1 00:27:27.557627 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 39748 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:27.559544 sshd[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:27.563955 systemd-logind[1574]: New session 23 of user core. Nov 1 00:27:27.576831 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 00:27:27.710406 sshd[5856]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:27.715467 systemd[1]: sshd@22-10.0.0.124:22-10.0.0.1:39748.service: Deactivated successfully. Nov 1 00:27:27.719172 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:27:27.720115 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:27:27.721420 systemd-logind[1574]: Removed session 23. Nov 1 00:27:30.249141 kubelet[2736]: E1101 00:27:30.249066 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:31.250717 containerd[1600]: time="2025-11-01T00:27:31.250524146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:27:31.252802 kubelet[2736]: E1101 00:27:31.251892 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:27:31.583060 containerd[1600]: time="2025-11-01T00:27:31.582911968Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:31.691990 containerd[1600]: time="2025-11-01T00:27:31.691874792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:27:31.691990 containerd[1600]: time="2025-11-01T00:27:31.691927773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:27:31.692284 kubelet[2736]: E1101 00:27:31.692213 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 
00:27:31.692391 kubelet[2736]: E1101 00:27:31.692300 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:27:31.692538 kubelet[2736]: E1101 00:27:31.692484 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:846d76463aa0425393ff76a8db3a1708,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-97c27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d87fdf9f-wx6gc_calico-system(65dc1f21-74ec-412a-ad9c-6e2587acdbb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:31.694490 containerd[1600]: time="2025-11-01T00:27:31.694432706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:27:32.080599 containerd[1600]: time="2025-11-01T00:27:32.080507051Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:32.082998 containerd[1600]: time="2025-11-01T00:27:32.082946006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:27:32.083204 containerd[1600]: time="2025-11-01T00:27:32.082982305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:27:32.083547 kubelet[2736]: E1101 00:27:32.083459 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:27:32.083547 kubelet[2736]: E1101 00:27:32.083541 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:27:32.083780 kubelet[2736]: E1101 00:27:32.083708 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-97c27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55d87fdf9f-wx6gc_calico-system(65dc1f21-74ec-412a-ad9c-6e2587acdbb7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:32.084968 kubelet[2736]: E1101 00:27:32.084902 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55d87fdf9f-wx6gc" podUID="65dc1f21-74ec-412a-ad9c-6e2587acdbb7" Nov 1 00:27:32.724566 systemd[1]: Started sshd@23-10.0.0.124:22-10.0.0.1:39758.service - OpenSSH per-connection server daemon (10.0.0.1:39758). Nov 1 00:27:32.758790 sshd[5880]: Accepted publickey for core from 10.0.0.1 port 39758 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:32.760942 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:32.766388 systemd-logind[1574]: New session 24 of user core. Nov 1 00:27:32.772768 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 00:27:32.906368 sshd[5880]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:32.910909 systemd[1]: sshd@23-10.0.0.124:22-10.0.0.1:39758.service: Deactivated successfully. Nov 1 00:27:32.913248 systemd-logind[1574]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:27:32.913373 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:27:32.914314 systemd-logind[1574]: Removed session 24. Nov 1 00:27:33.253453 containerd[1600]: time="2025-11-01T00:27:33.252984314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:27:33.615069 containerd[1600]: time="2025-11-01T00:27:33.614994563Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:33.623073 containerd[1600]: time="2025-11-01T00:27:33.622960472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:27:33.623297 containerd[1600]: time="2025-11-01T00:27:33.623055443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:33.623503 kubelet[2736]: E1101 00:27:33.623369 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:27:33.623503 kubelet[2736]: E1101 00:27:33.623451 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:27:33.624074 kubelet[2736]: E1101 00:27:33.623694 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56b8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vv2f4_calico-system(e7c7365e-fed9-44a2-bb07-9942249f952b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:33.628475 kubelet[2736]: E1101 00:27:33.626906 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vv2f4" podUID="e7c7365e-fed9-44a2-bb07-9942249f952b" Nov 1 00:27:35.249816 containerd[1600]: 
time="2025-11-01T00:27:35.249763585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:27:35.561062 containerd[1600]: time="2025-11-01T00:27:35.560855864Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:35.562152 containerd[1600]: time="2025-11-01T00:27:35.562103093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:27:35.562257 containerd[1600]: time="2025-11-01T00:27:35.562162266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:27:35.562475 kubelet[2736]: E1101 00:27:35.562417 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:27:35.562947 kubelet[2736]: E1101 00:27:35.562492 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:27:35.562947 kubelet[2736]: E1101 00:27:35.562694 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxtr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77dccc7d57-zdj9g_calico-system(26210ce5-453a-47fe-b5c4-bb7d1e50d30b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:35.563938 kubelet[2736]: E1101 00:27:35.563889 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77dccc7d57-zdj9g" podUID="26210ce5-453a-47fe-b5c4-bb7d1e50d30b" Nov 1 00:27:37.248901 kubelet[2736]: E1101 00:27:37.248854 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:37.250555 containerd[1600]: time="2025-11-01T00:27:37.249995328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:37.561303 containerd[1600]: time="2025-11-01T00:27:37.561111357Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:37.562480 containerd[1600]: time="2025-11-01T00:27:37.562431775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:37.562547 containerd[1600]: time="2025-11-01T00:27:37.562497058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:37.562738 kubelet[2736]: E1101 00:27:37.562691 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:37.562822 kubelet[2736]: E1101 00:27:37.562755 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:37.562985 kubelet[2736]: E1101 00:27:37.562926 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mp69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599794c67d-92gsc_calico-apiserver(151d3855-7594-4722-a64f-ba8ae7061d01): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:37.564121 kubelet[2736]: E1101 00:27:37.564091 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" podUID="151d3855-7594-4722-a64f-ba8ae7061d01" Nov 1 00:27:37.920725 systemd[1]: Started sshd@24-10.0.0.124:22-10.0.0.1:47060.service - OpenSSH per-connection server daemon (10.0.0.1:47060). 
Nov 1 00:27:37.950962 sshd[5897]: Accepted publickey for core from 10.0.0.1 port 47060 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:37.953104 sshd[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:37.957783 systemd-logind[1574]: New session 25 of user core. Nov 1 00:27:37.967622 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 1 00:27:38.081410 sshd[5897]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:38.086924 systemd[1]: sshd@24-10.0.0.124:22-10.0.0.1:47060.service: Deactivated successfully. Nov 1 00:27:38.091119 systemd-logind[1574]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:27:38.091237 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:27:38.092590 systemd-logind[1574]: Removed session 25. Nov 1 00:27:39.249909 containerd[1600]: time="2025-11-01T00:27:39.249841688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:27:39.584097 containerd[1600]: time="2025-11-01T00:27:39.583916507Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:39.586673 containerd[1600]: time="2025-11-01T00:27:39.586607894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:27:39.586801 containerd[1600]: time="2025-11-01T00:27:39.586729445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:27:39.586960 kubelet[2736]: E1101 00:27:39.586899 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:39.587528 kubelet[2736]: E1101 00:27:39.586972 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:27:39.587528 kubelet[2736]: E1101 00:27:39.587139 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgjns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-599794c67d-gvjds_calico-apiserver(18da393d-9f84-487e-a8ed-8cbdbb46de00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:39.588463 kubelet[2736]: E1101 00:27:39.588409 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" podUID="18da393d-9f84-487e-a8ed-8cbdbb46de00" Nov 1 00:27:42.248847 kubelet[2736]: E1101 00:27:42.248499 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:43.091633 systemd[1]: Started sshd@25-10.0.0.124:22-10.0.0.1:47064.service - OpenSSH per-connection server daemon (10.0.0.1:47064). Nov 1 00:27:43.127144 sshd[5915]: Accepted publickey for core from 10.0.0.1 port 47064 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:43.128434 sshd[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:43.133041 systemd-logind[1574]: New session 26 of user core. 
Nov 1 00:27:43.140810 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 1 00:27:43.270278 sshd[5915]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:43.276088 systemd[1]: sshd@25-10.0.0.124:22-10.0.0.1:47064.service: Deactivated successfully. Nov 1 00:27:43.281416 systemd-logind[1574]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:27:43.281674 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:27:43.283303 systemd-logind[1574]: Removed session 26. Nov 1 00:27:44.249328 kubelet[2736]: E1101 00:27:44.249246 2736 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:27:44.250565 kubelet[2736]: E1101 00:27:44.249918 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vv2f4" podUID="e7c7365e-fed9-44a2-bb07-9942249f952b" Nov 1 00:27:45.250979 kubelet[2736]: E1101 00:27:45.250899 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55d87fdf9f-wx6gc" podUID="65dc1f21-74ec-412a-ad9c-6e2587acdbb7" Nov 1 00:27:46.250801 containerd[1600]: time="2025-11-01T00:27:46.250494842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:27:46.569404 containerd[1600]: time="2025-11-01T00:27:46.569217940Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:46.570579 containerd[1600]: time="2025-11-01T00:27:46.570520487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:27:46.570748 containerd[1600]: time="2025-11-01T00:27:46.570559852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:27:46.570847 kubelet[2736]: E1101 00:27:46.570797 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:27:46.571236 kubelet[2736]: E1101 00:27:46.570861 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:27:46.571236 kubelet[2736]: E1101 00:27:46.571000 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b56mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lhvvn_calico-system(f97e1baa-80d7-4279-b761-fdf55a406885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:27:46.573700 containerd[1600]: time="2025-11-01T00:27:46.573671558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:27:46.894204 containerd[1600]: time="2025-11-01T00:27:46.894123672Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:27:46.895849 containerd[1600]: time="2025-11-01T00:27:46.895795300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:27:46.896102 containerd[1600]: time="2025-11-01T00:27:46.895904126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:27:46.896147 kubelet[2736]: E1101 00:27:46.896064 2736 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:27:46.896147 kubelet[2736]: E1101 00:27:46.896135 2736 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:27:46.896350 kubelet[2736]: E1101 00:27:46.896282 2736 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b56mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lhvvn_calico-system(f97e1baa-80d7-4279-b761-fdf55a406885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
logger="UnhandledError" Nov 1 00:27:46.897590 kubelet[2736]: E1101 00:27:46.897539 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lhvvn" podUID="f97e1baa-80d7-4279-b761-fdf55a406885" Nov 1 00:27:48.278644 systemd[1]: Started sshd@26-10.0.0.124:22-10.0.0.1:43770.service - OpenSSH per-connection server daemon (10.0.0.1:43770). Nov 1 00:27:48.316938 sshd[5932]: Accepted publickey for core from 10.0.0.1 port 43770 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:27:48.318762 sshd[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:27:48.323566 systemd-logind[1574]: New session 27 of user core. Nov 1 00:27:48.330672 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 1 00:27:48.468854 sshd[5932]: pam_unix(sshd:session): session closed for user core Nov 1 00:27:48.473736 systemd[1]: sshd@26-10.0.0.124:22-10.0.0.1:43770.service: Deactivated successfully. Nov 1 00:27:48.476674 systemd-logind[1574]: Session 27 logged out. Waiting for processes to exit. Nov 1 00:27:48.476721 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 00:27:48.477766 systemd-logind[1574]: Removed session 27. Nov 1 00:27:50.252371 kubelet[2736]: E1101 00:27:50.249876 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-gvjds" podUID="18da393d-9f84-487e-a8ed-8cbdbb46de00" Nov 1 00:27:50.252371 kubelet[2736]: E1101 00:27:50.249947 2736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-599794c67d-92gsc" podUID="151d3855-7594-4722-a64f-ba8ae7061d01"