Nov 1 00:15:58.023599 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:15:58.023622 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:15:58.023634 kernel: BIOS-provided physical RAM map:
Nov 1 00:15:58.023640 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 00:15:58.023647 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 00:15:58.023653 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:15:58.023661 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 1 00:15:58.023667 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 1 00:15:58.023674 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 00:15:58.023683 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 00:15:58.023690 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:15:58.023696 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:15:58.023708 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 00:15:58.023715 kernel: NX (Execute Disable) protection: active
Nov 1 00:15:58.023725 kernel: APIC: Static calls initialized
Nov 1 00:15:58.023737 kernel: SMBIOS 2.8 present.
Nov 1 00:15:58.023744 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 1 00:15:58.023751 kernel: Hypervisor detected: KVM
Nov 1 00:15:58.023759 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:15:58.023766 kernel: kvm-clock: using sched offset of 4066267894 cycles
Nov 1 00:15:58.023774 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:15:58.023781 kernel: tsc: Detected 2794.748 MHz processor
Nov 1 00:15:58.023789 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:15:58.023796 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:15:58.023806 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 1 00:15:58.023814 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 00:15:58.023821 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:15:58.023828 kernel: Using GB pages for direct mapping
Nov 1 00:15:58.023835 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:15:58.023842 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 1 00:15:58.023850 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:15:58.023871 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:15:58.023882 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:15:58.023896 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 1 00:15:58.023906 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:15:58.023915 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:15:58.023925 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:15:58.023935 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:15:58.023944 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 1 00:15:58.023954 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 1 00:15:58.023979 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 1 00:15:58.023990 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 1 00:15:58.023998 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 1 00:15:58.024005 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 1 00:15:58.024013 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 1 00:15:58.024020 kernel: No NUMA configuration found
Nov 1 00:15:58.024027 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 1 00:15:58.024037 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 1 00:15:58.024045 kernel: Zone ranges:
Nov 1 00:15:58.024052 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:15:58.024060 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 1 00:15:58.024067 kernel: Normal empty
Nov 1 00:15:58.024074 kernel: Movable zone start for each node
Nov 1 00:15:58.024082 kernel: Early memory node ranges
Nov 1 00:15:58.024089 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:15:58.024096 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 1 00:15:58.024104 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 1 00:15:58.024114 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:15:58.024124 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:15:58.024132 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 1 00:15:58.024139 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:15:58.024147 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:15:58.024154 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:15:58.024162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:15:58.024169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:15:58.024177 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:15:58.024187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:15:58.024194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:15:58.024202 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:15:58.024210 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:15:58.024217 kernel: TSC deadline timer available
Nov 1 00:15:58.024224 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 1 00:15:58.024232 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:15:58.024239 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 00:15:58.024249 kernel: kvm-guest: setup PV sched yield
Nov 1 00:15:58.024259 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 00:15:58.024266 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:15:58.024274 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:15:58.024282 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 1 00:15:58.024289 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288
Nov 1 00:15:58.024297 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152
Nov 1 00:15:58.024304 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 00:15:58.024311 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:15:58.024319 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:15:58.024330 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:15:58.024338 kernel: random: crng init done
Nov 1 00:15:58.024346 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:15:58.024353 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:15:58.024361 kernel: Fallback order for Node 0: 0
Nov 1 00:15:58.024368 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 1 00:15:58.024376 kernel: Policy zone: DMA32
Nov 1 00:15:58.024383 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:15:58.024393 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 136900K reserved, 0K cma-reserved)
Nov 1 00:15:58.024401 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 00:15:58.024409 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:15:58.024416 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:15:58.024424 kernel: Dynamic Preempt: voluntary
Nov 1 00:15:58.024431 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:15:58.024439 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:15:58.024447 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 00:15:58.024455 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:15:58.024465 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:15:58.024472 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:15:58.024480 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:15:58.024487 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 00:15:58.024497 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 00:15:58.024504 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:15:58.024512 kernel: Console: colour VGA+ 80x25
Nov 1 00:15:58.024519 kernel: printk: console [ttyS0] enabled
Nov 1 00:15:58.024527 kernel: ACPI: Core revision 20230628
Nov 1 00:15:58.024537 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:15:58.024545 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:15:58.024552 kernel: x2apic enabled
Nov 1 00:15:58.024560 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:15:58.024567 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 00:15:58.024575 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 00:15:58.024582 kernel: kvm-guest: setup PV IPIs
Nov 1 00:15:58.024590 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:15:58.024608 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 1 00:15:58.024616 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 1 00:15:58.024624 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:15:58.024631 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:15:58.024642 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:15:58.024650 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:15:58.024657 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:15:58.024665 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:15:58.024673 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 00:15:58.024684 kernel: active return thunk: retbleed_return_thunk
Nov 1 00:15:58.024692 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 00:15:58.024704 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:15:58.024713 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:15:58.024721 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 00:15:58.024731 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 00:15:58.024740 kernel: active return thunk: srso_return_thunk
Nov 1 00:15:58.024749 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 00:15:58.024761 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:15:58.024768 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:15:58.024776 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:15:58.024785 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:15:58.024793 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 00:15:58.024801 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:15:58.024809 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:15:58.024817 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:15:58.024825 kernel: landlock: Up and running.
Nov 1 00:15:58.024835 kernel: SELinux: Initializing.
Nov 1 00:15:58.024843 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:15:58.024851 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:15:58.024872 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 00:15:58.024883 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:15:58.024894 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:15:58.024905 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:15:58.024916 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:15:58.024929 kernel: ... version: 0
Nov 1 00:15:58.024943 kernel: ... bit width: 48
Nov 1 00:15:58.024951 kernel: ... generic registers: 6
Nov 1 00:15:58.024967 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:15:58.024977 kernel: ... max period: 00007fffffffffff
Nov 1 00:15:58.024985 kernel: ... fixed-purpose events: 0
Nov 1 00:15:58.024992 kernel: ... event mask: 000000000000003f
Nov 1 00:15:58.025000 kernel: signal: max sigframe size: 1776
Nov 1 00:15:58.025008 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:15:58.025016 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:15:58.025028 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:15:58.025035 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:15:58.025043 kernel: .... node #0, CPUs: #1 #2 #3
Nov 1 00:15:58.025051 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 00:15:58.025059 kernel: smpboot: Max logical packages: 1
Nov 1 00:15:58.025067 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 1 00:15:58.025075 kernel: devtmpfs: initialized
Nov 1 00:15:58.025082 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:15:58.025091 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:15:58.025101 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 00:15:58.025109 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:15:58.025117 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:15:58.025125 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:15:58.025133 kernel: audit: type=2000 audit(1761956156.391:1): state=initialized audit_enabled=0 res=1
Nov 1 00:15:58.025141 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:15:58.025149 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:15:58.025157 kernel: cpuidle: using governor menu
Nov 1 00:15:58.025165 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:15:58.025175 kernel: dca service started, version 1.12.1
Nov 1 00:15:58.025183 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 00:15:58.025191 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 1 00:15:58.025199 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:15:58.025207 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:15:58.025215 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:15:58.025223 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:15:58.025231 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:15:58.025239 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:15:58.025249 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:15:58.025257 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:15:58.025265 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:15:58.025273 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:15:58.025281 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:15:58.025288 kernel: ACPI: Interpreter enabled
Nov 1 00:15:58.025296 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:15:58.025304 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:15:58.025312 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:15:58.025323 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:15:58.025331 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:15:58.025339 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:15:58.025559 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:15:58.025699 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:15:58.025829 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:15:58.025840 kernel: PCI host bridge to bus 0000:00
Nov 1 00:15:58.026031 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:15:58.026160 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:15:58.026279 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:15:58.026397 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 1 00:15:58.026515 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 00:15:58.026659 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 1 00:15:58.026797 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:15:58.027019 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 00:15:58.027171 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 1 00:15:58.027302 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 1 00:15:58.027431 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 1 00:15:58.027557 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 1 00:15:58.027685 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:15:58.027841 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:15:58.028019 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 1 00:15:58.028154 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 1 00:15:58.028320 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 1 00:15:58.028513 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:15:58.028688 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 00:15:58.028907 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 1 00:15:58.029080 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 1 00:15:58.029273 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:15:58.029424 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 1 00:15:58.029564 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 1 00:15:58.029695 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 1 00:15:58.029831 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 1 00:15:58.030013 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 00:15:58.030151 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:15:58.030297 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 00:15:58.030428 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 1 00:15:58.030556 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 1 00:15:58.030784 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 00:15:58.030987 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 00:15:58.031002 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:15:58.031016 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:15:58.031024 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:15:58.031032 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:15:58.031040 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:15:58.031048 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:15:58.031056 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:15:58.031064 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:15:58.031072 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:15:58.031080 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:15:58.031090 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:15:58.031098 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:15:58.031106 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:15:58.031114 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:15:58.031122 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:15:58.031130 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:15:58.031138 kernel: iommu: Default domain type: Translated
Nov 1 00:15:58.031146 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:15:58.031154 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:15:58.031165 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:15:58.031173 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 00:15:58.031181 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 1 00:15:58.031313 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:15:58.031441 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:15:58.031570 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:15:58.031581 kernel: vgaarb: loaded
Nov 1 00:15:58.031589 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:15:58.031601 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:15:58.031609 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:15:58.031617 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:15:58.031625 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:15:58.031633 kernel: pnp: PnP ACPI init
Nov 1 00:15:58.031798 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 00:15:58.031811 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 00:15:58.031820 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:15:58.031828 kernel: NET: Registered PF_INET protocol family
Nov 1 00:15:58.031840 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:15:58.031849 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:15:58.031872 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:15:58.031881 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:15:58.031889 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 00:15:58.031897 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:15:58.031906 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:15:58.031914 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:15:58.031925 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:15:58.031934 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:15:58.032072 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:15:58.032193 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:15:58.032311 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:15:58.032456 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 1 00:15:58.032644 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 00:15:58.032770 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 1 00:15:58.032781 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:15:58.032794 kernel: Initialise system trusted keyrings
Nov 1 00:15:58.032802 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:15:58.032811 kernel: Key type asymmetric registered
Nov 1 00:15:58.032819 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:15:58.032827 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:15:58.032835 kernel: io scheduler mq-deadline registered
Nov 1 00:15:58.032843 kernel: io scheduler kyber registered
Nov 1 00:15:58.032851 kernel: io scheduler bfq registered
Nov 1 00:15:58.032901 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:15:58.032923 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 00:15:58.032933 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 00:15:58.032941 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 1 00:15:58.032949 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:15:58.032967 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:15:58.032976 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:15:58.032985 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:15:58.032993 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:15:58.033148 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 1 00:15:58.033166 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:15:58.033287 kernel: rtc_cmos 00:04: registered as rtc0
Nov 1 00:15:58.033414 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:15:57 UTC (1761956157)
Nov 1 00:15:58.033538 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 1 00:15:58.033549 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 1 00:15:58.033558 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:15:58.033566 kernel: Segment Routing with IPv6
Nov 1 00:15:58.033574 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:15:58.033586 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:15:58.033595 kernel: Key type dns_resolver registered
Nov 1 00:15:58.033603 kernel: IPI shorthand broadcast: enabled
Nov 1 00:15:58.033612 kernel: sched_clock: Marking stable (1014003441, 305273981)->(1530623356, -211345934)
Nov 1 00:15:58.033620 kernel: registered taskstats version 1
Nov 1 00:15:58.033628 kernel: Loading compiled-in X.509 certificates
Nov 1 00:15:58.033637 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:15:58.033645 kernel: Key type .fscrypt registered
Nov 1 00:15:58.033653 kernel: Key type fscrypt-provisioning registered
Nov 1 00:15:58.033664 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:15:58.033672 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:15:58.033680 kernel: ima: No architecture policies found
Nov 1 00:15:58.033688 kernel: clk: Disabling unused clocks
Nov 1 00:15:58.033696 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:15:58.033705 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:15:58.033714 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:15:58.033724 kernel: Run /init as init process
Nov 1 00:15:58.033732 kernel: with arguments:
Nov 1 00:15:58.033744 kernel: /init
Nov 1 00:15:58.033754 kernel: with environment:
Nov 1 00:15:58.033763 kernel: HOME=/
Nov 1 00:15:58.033773 kernel: TERM=linux
Nov 1 00:15:58.033784 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:15:58.033795 systemd[1]: Detected virtualization kvm.
Nov 1 00:15:58.033804 systemd[1]: Detected architecture x86-64.
Nov 1 00:15:58.033813 systemd[1]: Running in initrd.
Nov 1 00:15:58.033824 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:15:58.033832 systemd[1]: Hostname set to <localhost>.
Nov 1 00:15:58.033841 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:15:58.033849 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:15:58.033969 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:15:58.033979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:15:58.033988 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:15:58.033997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:15:58.034010 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:15:58.034032 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:15:58.034045 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:15:58.034054 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:15:58.034068 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:15:58.034076 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:15:58.034085 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:15:58.034094 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:15:58.034103 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:15:58.034112 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:15:58.034120 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:15:58.034129 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:15:58.034138 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:15:58.034152 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:15:58.034164 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:15:58.034176 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:15:58.034187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:15:58.034198 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:15:58.034211 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:15:58.034223 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:15:58.034236 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:15:58.034252 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:15:58.034264 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:15:58.034276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:15:58.034287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:15:58.034298 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:15:58.034337 systemd-journald[193]: Collecting audit messages is disabled.
Nov 1 00:15:58.034370 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:15:58.034382 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:15:58.034411 systemd-journald[193]: Journal started
Nov 1 00:15:58.034451 systemd-journald[193]: Runtime Journal (/run/log/journal/f2b3b91915974b24ae1c3a0a57d44ee3) is 6.0M, max 48.4M, 42.3M free.
Nov 1 00:15:58.036247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:15:58.032971 systemd-modules-load[194]: Inserted module 'overlay'
Nov 1 00:15:58.118104 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:15:58.118133 kernel: Bridge firewalling registered
Nov 1 00:15:58.118148 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:15:58.062469 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 1 00:15:58.118681 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:15:58.122917 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:15:58.134127 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:15:58.138999 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:15:58.143604 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:15:58.148495 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:15:58.153350 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:15:58.157410 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:15:58.168126 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:15:58.170618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:15:58.173884 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:15:58.179062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:15:58.191434 dracut-cmdline[225]: dracut-dracut-053
Nov 1 00:15:58.192458 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:15:58.199574 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:15:58.219033 systemd-resolved[234]: Positive Trust Anchors:
Nov 1 00:15:58.219053 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:15:58.219086 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:15:58.223177 systemd-resolved[234]: Defaulting to hostname 'linux'.
Nov 1 00:15:58.224980 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:15:58.235464 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:15:58.340916 kernel: SCSI subsystem initialized
Nov 1 00:15:58.353935 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:15:58.368919 kernel: iscsi: registered transport (tcp)
Nov 1 00:15:58.400908 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:15:58.401003 kernel: QLogic iSCSI HBA Driver
Nov 1 00:15:58.473968 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:15:58.488255 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:15:58.526168 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:15:58.526260 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:15:58.528240 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:15:58.577938 kernel: raid6: avx2x4 gen() 25458 MB/s
Nov 1 00:15:58.594940 kernel: raid6: avx2x2 gen() 20455 MB/s
Nov 1 00:15:58.613021 kernel: raid6: avx2x1 gen() 22173 MB/s
Nov 1 00:15:58.613152 kernel: raid6: using algorithm avx2x4 gen() 25458 MB/s
Nov 1 00:15:58.630919 kernel: raid6: .... xor() 6907 MB/s, rmw enabled
Nov 1 00:15:58.631028 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:15:58.654908 kernel: xor: automatically using best checksumming function avx
Nov 1 00:15:58.852925 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:15:58.873838 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:15:58.889170 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:15:58.906832 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Nov 1 00:15:58.913533 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:15:58.921153 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:15:58.950494 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Nov 1 00:15:58.999206 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:15:59.010159 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:15:59.092483 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:15:59.100412 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:15:59.119356 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:15:59.125211 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:15:59.127612 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:15:59.134570 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:15:59.147073 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:15:59.149332 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 1 00:15:59.158557 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:15:59.158603 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 1 00:15:59.163834 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:15:59.163941 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:15:59.166666 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:15:59.171939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:15:59.178617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:15:59.181247 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:15:59.191597 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:15:59.191635 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:15:59.192995 kernel: libata version 3.00 loaded.
Nov 1 00:15:59.193020 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:15:59.195037 kernel: GPT:9289727 != 19775487
Nov 1 00:15:59.195061 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:15:59.195072 kernel: GPT:9289727 != 19775487
Nov 1 00:15:59.195082 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:15:59.195098 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:15:59.198316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:15:59.206012 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:15:59.215273 kernel: ahci 0000:00:1f.2: version 3.0
Nov 1 00:15:59.215532 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 1 00:15:59.215547 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 1 00:15:59.215713 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 1 00:15:59.219875 kernel: scsi host0: ahci
Nov 1 00:15:59.221886 kernel: scsi host1: ahci
Nov 1 00:15:59.228021 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 1 00:15:59.231601 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (471)
Nov 1 00:15:59.231629 kernel: scsi host2: ahci
Nov 1 00:15:59.231919 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Nov 1 00:15:59.236889 kernel: scsi host3: ahci
Nov 1 00:15:59.237099 kernel: scsi host4: ahci
Nov 1 00:15:59.237266 kernel: scsi host5: ahci
Nov 1 00:15:59.237418 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Nov 1 00:15:59.237431 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Nov 1 00:15:59.237442 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Nov 1 00:15:59.237462 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Nov 1 00:15:59.237472 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Nov 1 00:15:59.237483 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Nov 1 00:15:59.326529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:15:59.334230 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 1 00:15:59.348269 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 1 00:15:59.350549 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 1 00:15:59.361490 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 00:15:59.373069 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:15:59.376440 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:15:59.384787 disk-uuid[557]: Primary Header is updated.
Nov 1 00:15:59.384787 disk-uuid[557]: Secondary Entries is updated.
Nov 1 00:15:59.384787 disk-uuid[557]: Secondary Header is updated.
Nov 1 00:15:59.390685 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:15:59.398894 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:15:59.401004 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:15:59.550743 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 1 00:15:59.551104 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 1 00:15:59.551197 kernel: ata3.00: applying bridge limits
Nov 1 00:15:59.551215 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 00:15:59.551228 kernel: ata3.00: configured for UDMA/100
Nov 1 00:16:00.385942 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 1 00:16:00.387941 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 00:16:00.388004 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 00:16:00.389897 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 1 00:16:00.393912 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 1 00:16:00.451922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:16:00.452182 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 1 00:16:00.453482 disk-uuid[559]: The operation has completed successfully.
Nov 1 00:16:00.456220 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 1 00:16:00.470935 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 1 00:16:00.787031 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:16:00.789174 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:16:00.802560 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:16:00.813615 sh[594]: Success
Nov 1 00:16:00.835911 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 1 00:16:00.880591 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:16:00.896027 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:16:00.899534 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:16:00.919523 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:16:00.919595 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:16:00.919607 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:16:00.921251 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:16:00.922513 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:16:00.929399 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:16:00.931366 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:16:00.942075 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:16:00.945551 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:16:00.961280 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:16:00.961363 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:16:00.961386 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:16:00.965924 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:16:00.982314 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:16:01.012066 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:16:01.107344 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:16:01.123180 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:16:01.124648 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:16:01.129691 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:16:01.188989 systemd-networkd[772]: lo: Link UP
Nov 1 00:16:01.189009 systemd-networkd[772]: lo: Gained carrier
Nov 1 00:16:01.194487 systemd-networkd[772]: Enumeration completed
Nov 1 00:16:01.194690 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:16:01.195523 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:16:01.195532 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:16:01.195833 systemd[1]: Reached target network.target - Network.
Nov 1 00:16:01.204158 systemd-networkd[772]: eth0: Link UP
Nov 1 00:16:01.204165 systemd-networkd[772]: eth0: Gained carrier
Nov 1 00:16:01.204180 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:16:01.239127 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 1 00:16:01.327337 ignition[775]: Ignition 2.19.0
Nov 1 00:16:01.327356 ignition[775]: Stage: fetch-offline
Nov 1 00:16:01.327440 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:01.327456 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:16:01.327617 ignition[775]: parsed url from cmdline: ""
Nov 1 00:16:01.327623 ignition[775]: no config URL provided
Nov 1 00:16:01.327631 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:16:01.327646 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:16:01.327690 ignition[775]: op(1): [started] loading QEMU firmware config module
Nov 1 00:16:01.327698 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 1 00:16:01.338084 ignition[775]: op(1): [finished] loading QEMU firmware config module
Nov 1 00:16:01.338123 ignition[775]: QEMU firmware config was not found. Ignoring...
Nov 1 00:16:01.440936 ignition[775]: parsing config with SHA512: e1c235e5f2cd462696c92d927c3ec9177f2c624c79fe001c3c09a6db45161d5fe749a0c9cf4f5bdbfceffdbea117d80c88ebd154b032325086fcebbd33848bd3
Nov 1 00:16:01.444773 unknown[775]: fetched base config from "system"
Nov 1 00:16:01.444791 unknown[775]: fetched user config from "qemu"
Nov 1 00:16:01.445240 ignition[775]: fetch-offline: fetch-offline passed
Nov 1 00:16:01.448566 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:16:01.445347 ignition[775]: Ignition finished successfully
Nov 1 00:16:01.456470 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 00:16:01.469118 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:16:01.500913 ignition[786]: Ignition 2.19.0
Nov 1 00:16:01.500927 ignition[786]: Stage: kargs
Nov 1 00:16:01.501118 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:01.501130 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:16:01.506927 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:16:01.501983 ignition[786]: kargs: kargs passed
Nov 1 00:16:01.502034 ignition[786]: Ignition finished successfully
Nov 1 00:16:01.535059 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:16:01.565836 ignition[793]: Ignition 2.19.0
Nov 1 00:16:01.565879 ignition[793]: Stage: disks
Nov 1 00:16:01.570090 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:16:01.566199 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:01.579944 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:16:01.566224 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:16:01.596103 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:16:01.567792 ignition[793]: disks: disks passed
Nov 1 00:16:01.598754 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:16:01.567897 ignition[793]: Ignition finished successfully
Nov 1 00:16:01.602602 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:16:01.606844 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:16:01.616209 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:16:01.632631 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 1 00:16:01.752906 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:16:01.762064 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:16:01.900918 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:16:01.901584 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:16:01.903070 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:16:01.918107 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:16:01.921427 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:16:01.930495 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Nov 1 00:16:01.924477 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 00:16:01.924533 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:16:01.945714 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:16:01.945758 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:16:01.947497 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:16:01.947526 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:16:01.924562 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:16:01.935310 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:16:01.949315 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:16:01.961354 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:16:02.002684 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:16:02.010231 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:16:02.014366 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:16:02.018795 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:16:02.145492 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:16:02.164069 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:16:02.167360 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:16:02.182265 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:16:02.186333 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:16:02.211451 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 00:16:02.224618 ignition[924]: INFO : Ignition 2.19.0
Nov 1 00:16:02.224618 ignition[924]: INFO : Stage: mount
Nov 1 00:16:02.228405 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:02.228405 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:16:02.228405 ignition[924]: INFO : mount: mount passed
Nov 1 00:16:02.228405 ignition[924]: INFO : Ignition finished successfully
Nov 1 00:16:02.240278 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 00:16:02.255315 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 00:16:02.272164 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:16:02.295434 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Nov 1 00:16:02.295520 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:16:02.295537 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:16:02.298265 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:16:02.301913 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 00:16:02.305677 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:16:02.345168 ignition[955]: INFO : Ignition 2.19.0
Nov 1 00:16:02.345168 ignition[955]: INFO : Stage: files
Nov 1 00:16:02.348119 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:16:02.348119 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:16:02.348119 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:16:02.354291 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:16:02.354291 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:16:02.359218 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:16:02.359218 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:16:02.359218 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:16:02.357914 unknown[955]: wrote ssh authorized keys file for user: core
Nov 1 00:16:02.367725 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:16:02.367725 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 00:16:02.434610 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:16:02.632084 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 00:16:02.632084 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:16:02.638907 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:16:02.638907 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:16:02.638907 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:16:02.638907 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:16:02.651760 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:16:02.651760 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:16:02.651760 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:16:02.651760 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:16:02.651760 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:16:02.651760 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 00:16:02.651760 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:16:02.676230 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:16:02.676230 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 1 00:16:02.889353 systemd-networkd[772]: eth0: Gained IPv6LL Nov 1 00:16:03.113901 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:16:04.395670 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 00:16:04.395670 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:16:04.403455 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:16:04.407897 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:16:04.407897 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:16:04.407897 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 1 00:16:04.416479 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:16:04.416479 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:16:04.416479 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 1 00:16:04.416479 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 00:16:04.464970 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:16:04.473831 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:16:04.477960 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 00:16:04.477960 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:16:04.484091 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:16:04.486902 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:16:04.490343 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:16:04.493203 ignition[955]: INFO : files: files passed Nov 1 00:16:04.494433 ignition[955]: INFO : Ignition finished successfully Nov 1 00:16:04.498481 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:16:04.524198 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:16:04.528285 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:16:04.535430 systemd[1]: ignition-quench.service: Deactivated successfully. 
Nov 1 00:16:04.535626 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:16:04.563711 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Nov 1 00:16:04.569582 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:16:04.569582 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:16:04.576170 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:16:04.581831 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:16:04.583118 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:16:04.593280 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:16:04.633406 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:16:04.633589 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:16:04.637731 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:16:04.641582 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:16:04.643886 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:16:04.654142 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:16:04.675537 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:16:04.690340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:16:04.703017 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:16:04.705510 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:16:04.710969 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:16:04.714839 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:16:04.715065 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:16:04.719381 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:16:04.722748 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:16:04.726920 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:16:04.731089 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:16:04.735194 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:16:04.739483 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:16:04.743789 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:16:04.748457 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:16:04.752209 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:16:04.756537 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:16:04.760288 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:16:04.761502 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:16:04.764947 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:16:04.768052 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 1 00:16:04.771915 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:16:04.772161 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:16:04.776246 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:16:04.776481 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:16:04.780675 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:16:04.780869 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:16:04.784272 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:16:04.787963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:16:04.792048 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:16:04.796253 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:16:04.800242 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:16:04.803874 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:16:04.804006 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:16:04.807276 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:16:04.807374 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:16:04.812227 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:16:04.813234 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:16:04.816341 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:16:04.816491 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:16:04.828222 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:16:04.833042 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:16:04.835966 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:16:04.836197 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:16:04.839900 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:16:04.840130 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:16:04.858247 ignition[1009]: INFO : Ignition 2.19.0 Nov 1 00:16:04.858247 ignition[1009]: INFO : Stage: umount Nov 1 00:16:04.858247 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:16:04.858247 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:16:04.858247 ignition[1009]: INFO : umount: umount passed Nov 1 00:16:04.858247 ignition[1009]: INFO : Ignition finished successfully Nov 1 00:16:04.848400 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:16:04.848522 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:16:04.859881 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:16:04.860033 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:16:04.863719 systemd[1]: Stopped target network.target - Network. Nov 1 00:16:04.866963 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:16:04.867056 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:16:04.870941 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:16:04.871005 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
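The umount stage and the cascade of Stopped/Closed entries above run inside the initramfs just before switch-root, so they are only readable afterwards because journald flushes the runtime journal to disk. A small sketch for retrieving them post-boot, assuming persistent journaling (which the journal flush further down in this log implies):

    journalctl -b -t ignition --no-pager        # every Ignition stage from this boot, mount through umount
    journalctl -b -u initrd-cleanup.service     # the teardown ordering shown above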
Nov 1 00:16:04.874201 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:16:04.874284 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:16:04.877705 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:16:04.877891 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:16:04.881830 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:16:04.885827 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:16:04.887958 systemd-networkd[772]: eth0: DHCPv6 lease lost Nov 1 00:16:04.893501 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:16:04.893977 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:16:04.897826 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:16:04.898158 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:16:04.905535 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:16:04.905647 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:16:04.924116 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:16:04.926184 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:16:04.926324 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:16:04.930451 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:16:04.930563 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:16:04.982389 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:16:04.982515 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:16:04.987535 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:16:04.987648 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:16:04.992867 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:16:05.010966 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:16:05.017381 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:16:05.017548 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:16:05.026981 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:16:05.028719 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:16:05.034042 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:16:05.036020 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:16:05.041003 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:16:05.041069 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:16:05.046345 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:16:05.046417 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:16:05.051490 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:16:05.051560 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:16:05.057129 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:16:05.057235 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:16:05.062527 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 1 00:16:05.062609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:16:05.069450 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:16:05.072384 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:16:05.091195 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:16:05.092393 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:16:05.092505 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:16:05.097024 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 1 00:16:05.097301 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:16:05.101455 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:16:05.101563 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:16:05.106570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:16:05.106651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:16:05.131917 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:16:05.132105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:16:05.133615 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:16:05.147141 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:16:05.157989 systemd[1]: Switching root. Nov 1 00:16:05.222327 systemd-journald[193]: Journal stopped Nov 1 00:16:06.938989 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Nov 1 00:16:06.939084 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:16:06.939104 kernel: SELinux: policy capability open_perms=1 Nov 1 00:16:06.939119 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:16:06.939134 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:16:06.939159 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:16:06.939175 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:16:06.939190 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:16:06.939206 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:16:06.939227 kernel: audit: type=1403 audit(1761956165.913:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:16:06.939250 systemd[1]: Successfully loaded SELinux policy in 49.860ms. Nov 1 00:16:06.939315 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.476ms. Nov 1 00:16:06.939336 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:16:06.939360 systemd[1]: Detected virtualization kvm. Nov 1 00:16:06.939376 systemd[1]: Detected architecture x86-64. Nov 1 00:16:06.939391 systemd[1]: Detected first boot. Nov 1 00:16:06.939408 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:16:06.939423 zram_generator::config[1054]: No configuration found. Nov 1 00:16:06.939441 systemd[1]: Populated /etc with preset unit settings. 
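At this point the real root takes over: the journal stops and restarts, the SELinux policy loads in about 50 ms, and because this is a first boot the machine ID is derived from the VM UUID. A quick sketch for confirming the same facts from a shell, using standard systemd and SELinux tooling:

    systemd-detect-virt        # prints "kvm", matching "Detected virtualization kvm"
    cat /etc/machine-id        # the ID initialized from the VM UUID above
    getenforce                 # SELinux mode after the policy load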
Nov 1 00:16:06.939458 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:16:06.939474 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 1 00:16:06.939491 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:16:06.939520 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:16:06.939537 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:16:06.939554 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:16:06.939569 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:16:06.939589 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:16:06.939605 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:16:06.939621 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:16:06.939637 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:16:06.939662 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:16:06.939679 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:16:06.939708 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:16:06.939727 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:16:06.939753 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:16:06.939771 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:16:06.939787 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:16:06.939805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:16:06.939821 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 1 00:16:06.939845 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 1 00:16:06.939879 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 1 00:16:06.939896 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:16:06.939912 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:16:06.939927 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:16:06.939943 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:16:06.939958 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:16:06.939973 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:16:06.939998 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:16:06.940015 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:16:06.940032 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:16:06.940047 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:16:06.940063 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:16:06.940078 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:16:06.940101 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
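The burst of Created/Started/Set up entries above is "Populated /etc with preset unit settings" taking effect: on a fresh /etc, systemd enables or disables every unit according to the shipped preset policy, the same mechanism the Ignition files stage invoked for coreos-metadata.service and prepare-helm.service. A sketch, assuming the standard preset file locations:

    cat /usr/lib/systemd/system-preset/*.preset   # the distribution policy
    systemctl is-enabled prepare-helm.service     # "enabled", per the earlier preset op
    systemctl preset-all                          # re-apply the whole policy (what first boot did)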
Nov 1 00:16:06.940117 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:16:06.940133 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:06.940157 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:16:06.940173 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:16:06.940188 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:16:06.940206 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:16:06.940222 systemd[1]: Reached target machines.target - Containers. Nov 1 00:16:06.940238 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:16:06.940254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:16:06.940270 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:16:06.940294 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:16:06.940310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:16:06.940326 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:16:06.940341 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:16:06.940356 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:16:06.940372 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:16:06.940389 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:16:06.940404 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:16:06.940419 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 00:16:06.940443 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:16:06.940464 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:16:06.940480 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:16:06.940496 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:16:06.940512 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:16:06.940556 systemd-journald[1117]: Collecting audit messages is disabled. Nov 1 00:16:06.940586 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:16:06.940613 kernel: fuse: init (API version 7.39) Nov 1 00:16:06.940629 systemd-journald[1117]: Journal started Nov 1 00:16:06.940656 systemd-journald[1117]: Runtime Journal (/run/log/journal/f2b3b91915974b24ae1c3a0a57d44ee3) is 6.0M, max 48.4M, 42.3M free. Nov 1 00:16:06.951910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:16:06.952034 kernel: loop: module loaded Nov 1 00:16:06.952078 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:16:06.952101 systemd[1]: Stopped verity-setup.service. Nov 1 00:16:06.569721 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:16:06.593426 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Nov 1 00:16:06.594114 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:16:06.594676 systemd[1]: systemd-journald.service: Consumed 1.114s CPU time. Nov 1 00:16:06.963998 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:06.964096 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:16:06.968286 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:16:06.969883 kernel: ACPI: bus type drm_connector registered Nov 1 00:16:06.971635 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:16:06.973840 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:16:06.975828 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:16:06.977876 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:16:06.980070 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:16:06.982066 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:16:06.984650 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:16:06.984937 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:16:06.987848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:06.988094 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:16:06.990891 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:16:06.991212 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:16:06.993826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:06.994159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:16:06.996920 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:16:06.997184 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 00:16:07.000684 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:07.001045 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:16:07.003603 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:16:07.006193 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:16:07.009363 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:16:07.011782 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:16:07.042523 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:16:07.050101 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:16:07.053788 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:16:07.056135 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:16:07.056246 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:16:07.059368 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 00:16:07.063120 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:16:07.066606 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
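The modprobe@*.service pairs above (configfs, dm_mod, drm, efi_pstore, fuse, loop) are instances of a single template unit: the instance name after "@" becomes the module handed to modprobe, and each oneshot deactivates as soon as the module is loaded. A sketch; the exact ExecStart line is approximate:

    systemctl cat modprobe@.service        # the template; ExecStart is roughly 'modprobe -abq %i'
    systemctl start modprobe@fuse.service  # loads the fuse module, as logged above
    lsmod | grep -w fuse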
Nov 1 00:16:07.068688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:16:07.071683 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:16:07.076094 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:16:07.078933 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:16:07.081735 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:16:07.084526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:16:07.086424 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:16:07.090618 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:16:07.098498 systemd-journald[1117]: Time spent on flushing to /var/log/journal/f2b3b91915974b24ae1c3a0a57d44ee3 is 42.861ms for 951 entries. Nov 1 00:16:07.098498 systemd-journald[1117]: System Journal (/var/log/journal/f2b3b91915974b24ae1c3a0a57d44ee3) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:16:07.170345 systemd-journald[1117]: Received client request to flush runtime journal. Nov 1 00:16:07.170396 kernel: loop0: detected capacity change from 0 to 140768 Nov 1 00:16:07.101575 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:16:07.114238 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:16:07.118133 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:16:07.122070 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:16:07.127518 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:16:07.130651 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:16:07.160168 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:16:07.179575 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 00:16:07.186754 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 00:16:07.191521 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:16:07.200881 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:16:07.216595 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Nov 1 00:16:07.216623 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Nov 1 00:16:07.224403 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:16:07.239590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:16:07.260904 kernel: loop1: detected capacity change from 0 to 229808 Nov 1 00:16:07.257136 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:16:07.264580 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:16:07.265531 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 1 00:16:07.271482 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 00:16:07.302762 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:16:07.302987 kernel: loop2: detected capacity change from 0 to 142488 Nov 1 00:16:07.322416 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:16:07.367372 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Nov 1 00:16:07.367396 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Nov 1 00:16:07.373407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:16:07.382017 kernel: loop3: detected capacity change from 0 to 140768 Nov 1 00:16:07.396144 kernel: loop4: detected capacity change from 0 to 229808 Nov 1 00:16:07.410911 kernel: loop5: detected capacity change from 0 to 142488 Nov 1 00:16:07.432905 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 1 00:16:07.433791 (sd-merge)[1195]: Merged extensions into '/usr'. Nov 1 00:16:07.594232 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:16:07.594248 systemd[1]: Reloading... Nov 1 00:16:07.690890 zram_generator::config[1220]: No configuration found. Nov 1 00:16:08.038635 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:16:08.042662 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:16:08.129257 systemd[1]: Reloading finished in 534 ms. Nov 1 00:16:08.173161 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:16:08.176604 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:16:08.192218 systemd[1]: Starting ensure-sysext.service... Nov 1 00:16:08.202773 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:16:08.217637 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:16:08.217775 systemd[1]: Reloading... Nov 1 00:16:08.261908 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:16:08.262554 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:16:08.270690 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:16:08.271928 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Nov 1 00:16:08.275063 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Nov 1 00:16:08.287355 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:16:08.287375 systemd-tmpfiles[1259]: Skipping /boot Nov 1 00:16:08.307252 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:16:08.307263 systemd-tmpfiles[1259]: Skipping /boot Nov 1 00:16:08.392926 zram_generator::config[1286]: No configuration found. 
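The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes image is exactly the one Ignition symlinked into /etc/extensions earlier, and the reload that follows picks up the units those images ship. A sketch for inspecting the merge at runtime:

    systemd-sysext status     # which hierarchies are merged, and from which images
    ls -l /etc/extensions/    # kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
    systemd-sysext refresh    # re-merge after adding or removing an image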
Nov 1 00:16:08.527623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:16:08.599634 systemd[1]: Reloading finished in 381 ms. Nov 1 00:16:08.625210 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:16:08.644067 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:16:08.672958 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:16:08.681696 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:16:08.703814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:16:08.711881 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:16:08.735688 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:16:08.783477 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:16:08.788195 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:16:08.794769 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:08.795348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:16:08.800880 augenrules[1345]: No rules Nov 1 00:16:08.805307 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:16:08.819126 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:16:08.825724 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:16:08.829118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:16:08.836293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:16:08.845230 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:16:08.848033 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:08.850382 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:16:08.871418 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:16:08.874466 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:16:08.878717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:08.878987 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:16:08.882441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:08.882789 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:16:08.886238 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:08.886910 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:16:08.896885 systemd-udevd[1359]: Using default interface naming scheme 'v255'. Nov 1 00:16:08.905965 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Nov 1 00:16:08.913015 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:16:08.928752 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:08.929061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:16:08.938209 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:16:08.951576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:16:08.962634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:16:08.965653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:16:08.966791 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:16:08.967630 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:08.969652 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:16:08.973023 systemd-resolved[1334]: Positive Trust Anchors: Nov 1 00:16:08.973038 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:16:08.973078 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:16:08.976446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:08.976742 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:16:08.983677 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:08.984123 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:16:08.992017 systemd-resolved[1334]: Defaulting to hostname 'linux'. Nov 1 00:16:08.992526 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:08.993396 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:16:08.997537 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:16:09.016001 systemd[1]: Finished ensure-sysext.service. Nov 1 00:16:09.024079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:16:09.028610 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:09.028885 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
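systemd-resolved starts with its compiled-in DNSSEC trust anchor (the root-zone DS record) plus negative anchors for RFC 1918 reverse zones and similar private names, and, with no hostname configured yet, falls back to "linux". A sketch for checking both; the hostname value is a placeholder:

    resolvectl status                 # per-link DNS servers and DNSSEC state
    hostnamectl hostname              # would print the fallback "linux" here
    hostnamectl set-hostname node1    # a static name (placeholder) avoids the fallback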
Nov 1 00:16:09.033883 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1380) Nov 1 00:16:09.040311 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:16:09.050141 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:16:09.100248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:16:09.109136 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:16:09.113362 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:16:09.127175 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:16:09.145236 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 00:16:09.147719 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:16:09.147782 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:16:09.148952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:09.149323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:16:09.152729 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:16:09.153138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:16:09.156341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:09.156592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:16:09.158932 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:16:09.161667 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:09.161999 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:16:09.164924 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:16:09.176036 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:16:09.176527 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 00:16:09.176807 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:16:09.176761 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 00:16:09.192891 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:16:09.204302 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:16:09.382133 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:16:09.384154 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:16:09.386651 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:16:09.386808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:16:09.407281 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:16:09.411511 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Nov 1 00:16:09.516228 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 00:16:09.519162 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:16:09.535455 systemd-networkd[1404]: lo: Link UP Nov 1 00:16:09.535474 systemd-networkd[1404]: lo: Gained carrier Nov 1 00:16:09.540983 systemd-networkd[1404]: Enumeration completed Nov 1 00:16:09.541162 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:16:09.542486 systemd[1]: Reached target network.target - Network. Nov 1 00:16:09.542822 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:16:09.542829 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:16:09.546491 systemd-networkd[1404]: eth0: Link UP Nov 1 00:16:09.546510 systemd-networkd[1404]: eth0: Gained carrier Nov 1 00:16:09.546536 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:16:09.563947 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:16:09.583519 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.38/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:16:09.585674 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Nov 1 00:16:09.590561 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:16:09.590652 systemd-timesyncd[1407]: Initial clock synchronization to Sat 2025-11-01 00:16:09.753983 UTC. Nov 1 00:16:09.605967 kernel: kvm_amd: TSC scaling supported Nov 1 00:16:09.606157 kernel: kvm_amd: Nested Virtualization enabled Nov 1 00:16:09.606188 kernel: kvm_amd: Nested Paging enabled Nov 1 00:16:09.606215 kernel: kvm_amd: LBR virtualization supported Nov 1 00:16:09.606246 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 1 00:16:09.606274 kernel: kvm_amd: Virtual GIF supported Nov 1 00:16:09.636935 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:16:09.659789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:16:09.679148 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 00:16:09.701184 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 00:16:09.731723 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:16:09.955908 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 00:16:09.959945 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:16:09.962501 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:16:09.965014 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:16:09.967441 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:16:09.970595 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:16:09.974060 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:16:09.977246 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
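eth0 matched the stock zz-default.network and obtained 10.0.0.38/16 over DHCPv4 from 10.0.0.1, which also advertised itself as the NTP server timesyncd then synchronized against. A sketch for inspecting that state, followed by a hypothetical drop-in that would pin the same address statically:

    networkctl status eth0            # carrier, address, and the matching .network file
    timedatectl timesync-status       # server 10.0.0.1, sync state
    # Hypothetical /etc/systemd/network/10-static.network overriding zz-default:
    # [Match]
    # Name=eth0
    # [Network]
    # Address=10.0.0.38/16
    # Gateway=10.0.0.1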
Nov 1 00:16:09.979684 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:16:09.979727 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:16:09.981298 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:16:09.986552 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:16:09.990838 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:16:10.005474 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:16:10.013379 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 00:16:10.016612 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:16:10.022332 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:16:10.024228 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:16:10.026395 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:16:10.026434 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:16:10.028684 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:16:10.035638 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:16:10.037422 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:16:10.040380 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:16:10.046815 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:16:10.048845 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:16:10.053369 jq[1438]: false Nov 1 00:16:10.054360 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:16:10.061155 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:16:10.066666 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:16:10.078192 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 1 00:16:10.083824 extend-filesystems[1439]: Found loop3 Nov 1 00:16:10.083824 extend-filesystems[1439]: Found loop4 Nov 1 00:16:10.083824 extend-filesystems[1439]: Found loop5 Nov 1 00:16:10.083824 extend-filesystems[1439]: Found sr0 Nov 1 00:16:10.083824 extend-filesystems[1439]: Found vda Nov 1 00:16:10.083824 extend-filesystems[1439]: Found vda1 Nov 1 00:16:10.083824 extend-filesystems[1439]: Found vda2 Nov 1 00:16:10.083824 extend-filesystems[1439]: Found vda3 Nov 1 00:16:10.083824 extend-filesystems[1439]: Found usr Nov 1 00:16:10.083824 extend-filesystems[1439]: Found vda4 Nov 1 00:16:10.083824 extend-filesystems[1439]: Found vda6 Nov 1 00:16:10.083824 extend-filesystems[1439]: Found vda7 Nov 1 00:16:10.083824 extend-filesystems[1439]: Found vda9 Nov 1 00:16:10.083824 extend-filesystems[1439]: Checking size of /dev/vda9 Nov 1 00:16:10.172172 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1374) Nov 1 00:16:10.172218 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:16:10.172249 extend-filesystems[1439]: Resized partition /dev/vda9 Nov 1 00:16:10.097672 dbus-daemon[1437]: [system] SELinux support is enabled Nov 1 00:16:10.094315 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:16:10.174572 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024) Nov 1 00:16:10.096842 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:16:10.098587 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:16:10.103698 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:16:10.181790 update_engine[1453]: I20251101 00:16:10.137521 1453 main.cc:92] Flatcar Update Engine starting Nov 1 00:16:10.181790 update_engine[1453]: I20251101 00:16:10.139356 1453 update_check_scheduler.cc:74] Next update check in 6m51s Nov 1 00:16:10.114356 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:16:10.184526 jq[1457]: true Nov 1 00:16:10.119716 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:16:10.126292 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 00:16:10.147544 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:16:10.147966 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:16:10.148456 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:16:10.148985 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:16:10.162577 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:16:10.162864 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:16:10.184612 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:16:10.193114 jq[1464]: true Nov 1 00:16:10.220364 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:16:10.223049 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
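extend-filesystems walks the block devices, finds the ext4 filesystem on /dev/vda9 (553472 blocks) smaller than its partition, and grows it online with resize2fs 1.47.1; the kernel confirms the new 1864699-block size a little further below. The manual equivalent, using the device names from this log:

    lsblk -o NAME,SIZE,FSTYPE /dev/vda   # confirm the partition layout
    resize2fs /dev/vda9                  # online-grow ext4 to fill the partition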
Nov 1 00:16:10.223296 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:16:10.242315 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:16:10.242485 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:16:10.259360 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:16:10.287035 tar[1463]: linux-amd64/LICENSE Nov 1 00:16:10.290917 tar[1463]: linux-amd64/helm Nov 1 00:16:10.372560 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:16:10.372594 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:16:10.382313 systemd-logind[1452]: New seat seat0. Nov 1 00:16:10.385144 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:16:10.394007 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:16:10.409600 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:16:10.414928 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:16:10.435934 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:16:10.546981 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:16:10.546981 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:16:10.546981 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 1 00:16:10.559587 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Nov 1 00:16:10.561418 bash[1490]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:16:10.561856 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:16:10.564679 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:16:10.565047 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:16:10.568502 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:16:10.577606 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 1 00:16:10.585182 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:16:10.585558 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:16:10.598369 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:16:10.733560 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:16:10.748534 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:16:10.757432 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:16:10.760800 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:16:11.199919 containerd[1465]: time="2025-11-01T00:16:11.198680315Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:16:11.242401 containerd[1465]: time="2025-11-01T00:16:11.242284310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:11.245022 containerd[1465]: time="2025-11-01T00:16:11.244940211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.245114311Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.245153309Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.245439130Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.245465615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.245591805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.245613415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.245941536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.245966430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.245989325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.246035094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.246183323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247354 containerd[1465]: time="2025-11-01T00:16:11.246538776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247899 containerd[1465]: time="2025-11-01T00:16:11.246720995Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:16:11.247899 containerd[1465]: time="2025-11-01T00:16:11.246743879Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:16:11.247899 containerd[1465]: time="2025-11-01T00:16:11.246968399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 1 00:16:11.247899 containerd[1465]: time="2025-11-01T00:16:11.247090725Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:16:11.284311 containerd[1465]: time="2025-11-01T00:16:11.278044850Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:16:11.284311 containerd[1465]: time="2025-11-01T00:16:11.279698974Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:16:11.284537 containerd[1465]: time="2025-11-01T00:16:11.284337131Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:16:11.284537 containerd[1465]: time="2025-11-01T00:16:11.284407834Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:16:11.284537 containerd[1465]: time="2025-11-01T00:16:11.284430179Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:16:11.284791 containerd[1465]: time="2025-11-01T00:16:11.284727717Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:16:11.285155 containerd[1465]: time="2025-11-01T00:16:11.285108716Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:16:11.285309 containerd[1465]: time="2025-11-01T00:16:11.285270569Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 00:16:11.285309 containerd[1465]: time="2025-11-01T00:16:11.285294177Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:16:11.285377 containerd[1465]: time="2025-11-01T00:16:11.285311300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:16:11.285377 containerd[1465]: time="2025-11-01T00:16:11.285333297Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285381871Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285400023Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285421277Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285438613Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285457571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285479395Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285507388Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285538839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285576868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285595133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285610409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285627114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285643685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286382 containerd[1465]: time="2025-11-01T00:16:11.285658941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285675280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285705016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285760168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285782920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285801399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285818552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285837397Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285865851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285896975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285911619Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285970900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.285994448Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:16:11.286892 containerd[1465]: time="2025-11-01T00:16:11.286007654Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:16:11.287255 containerd[1465]: time="2025-11-01T00:16:11.286022614Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:16:11.287255 containerd[1465]: time="2025-11-01T00:16:11.286034832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.287255 containerd[1465]: time="2025-11-01T00:16:11.286049497Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:16:11.287255 containerd[1465]: time="2025-11-01T00:16:11.286061490Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:16:11.287255 containerd[1465]: time="2025-11-01T00:16:11.286075420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:16:11.287403 containerd[1465]: time="2025-11-01T00:16:11.286399015Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:16:11.287403 containerd[1465]: time="2025-11-01T00:16:11.286487003Z" level=info msg="Connect containerd service" Nov 1 00:16:11.287403 containerd[1465]: time="2025-11-01T00:16:11.286540961Z" level=info msg="using legacy CRI server" Nov 1 00:16:11.287403 containerd[1465]: time="2025-11-01T00:16:11.286552384Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:16:11.287403 containerd[1465]: time="2025-11-01T00:16:11.286691485Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:16:11.287403 containerd[1465]: time="2025-11-01T00:16:11.287574942Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:16:11.289637 containerd[1465]: time="2025-11-01T00:16:11.288869401Z" level=info msg="Start subscribing containerd event" Nov 1 00:16:11.289637 containerd[1465]: time="2025-11-01T00:16:11.288935403Z" level=info msg="Start recovering state" Nov 1 00:16:11.289637 containerd[1465]: time="2025-11-01T00:16:11.289028736Z" level=info msg="Start event monitor" Nov 1 00:16:11.289637 containerd[1465]: time="2025-11-01T00:16:11.289062849Z" level=info msg="Start snapshots syncer" Nov 1 00:16:11.289637 containerd[1465]: time="2025-11-01T00:16:11.289079451Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:16:11.289637 containerd[1465]: time="2025-11-01T00:16:11.289093544Z" level=info msg="Start streaming server" Nov 1 00:16:11.291307 containerd[1465]: time="2025-11-01T00:16:11.291279508Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:16:11.291388 containerd[1465]: time="2025-11-01T00:16:11.291346407Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:16:11.291440 containerd[1465]: time="2025-11-01T00:16:11.291421281Z" level=info msg="containerd successfully booted in 0.119134s" Nov 1 00:16:11.292032 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:16:11.407242 systemd-networkd[1404]: eth0: Gained IPv6LL Nov 1 00:16:11.433064 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:16:11.437723 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:16:11.485407 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 00:16:11.491147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:16:11.503948 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:16:11.542511 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 00:16:11.544126 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 1 00:16:11.548441 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:16:11.567028 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:16:11.924071 tar[1463]: linux-amd64/README.md Nov 1 00:16:11.947046 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
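containerd comes up with CNI unconfigured ("no network config found in /etc/cni/net.d"), which is normal before a network add-on installs its config; the cri plugin's conf syncer keeps watching the directory. A purely hypothetical minimal conflist, only to illustrate the file the syncer is waiting for (real clusters get this from flannel, Calico, and similar add-ons; the name and subnet below are invented):

  cat <<'EOF' >/etc/cni/net.d/10-example.conflist
  {
    "cniVersion": "1.0.0",
    "name": "example",
    "plugins": [
      { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
        "ipam": { "type": "host-local",
                  "ranges": [[ { "subnet": "10.244.0.0/24" } ]] } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF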
Nov 1 00:16:12.168825 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:16:12.190384 systemd[1]: Started sshd@0-10.0.0.38:22-10.0.0.1:43794.service - OpenSSH per-connection server daemon (10.0.0.1:43794). Nov 1 00:16:12.816141 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 43794 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:16:12.824603 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:12.849940 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:16:12.871729 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:16:12.889828 systemd-logind[1452]: New session 1 of user core. Nov 1 00:16:12.941483 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:16:12.964823 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:16:13.127782 (systemd)[1550]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:16:13.501467 systemd[1550]: Queued start job for default target default.target. Nov 1 00:16:13.654213 systemd[1550]: Created slice app.slice - User Application Slice. Nov 1 00:16:13.654248 systemd[1550]: Reached target paths.target - Paths. Nov 1 00:16:13.654262 systemd[1550]: Reached target timers.target - Timers. Nov 1 00:16:13.662735 systemd[1550]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:16:13.696411 systemd[1550]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:16:13.696613 systemd[1550]: Reached target sockets.target - Sockets. Nov 1 00:16:13.696634 systemd[1550]: Reached target basic.target - Basic System. Nov 1 00:16:13.696701 systemd[1550]: Reached target default.target - Main User Target. Nov 1 00:16:13.696888 systemd[1550]: Startup finished in 544ms. Nov 1 00:16:13.698174 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:16:13.731544 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:16:13.910314 systemd[1]: Started sshd@1-10.0.0.38:22-10.0.0.1:43804.service - OpenSSH per-connection server daemon (10.0.0.1:43804). Nov 1 00:16:14.213635 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 43804 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:16:14.223481 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:14.264566 systemd-logind[1452]: New session 2 of user core. Nov 1 00:16:14.279001 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:16:14.371127 sshd[1561]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:14.386253 systemd[1]: sshd@1-10.0.0.38:22-10.0.0.1:43804.service: Deactivated successfully. Nov 1 00:16:14.388589 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:16:14.391155 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:16:14.414593 systemd[1]: Started sshd@2-10.0.0.38:22-10.0.0.1:43816.service - OpenSSH per-connection server daemon (10.0.0.1:43816). Nov 1 00:16:14.423732 systemd-logind[1452]: Removed session 2. Nov 1 00:16:14.486816 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 43816 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:16:14.489742 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:14.523032 systemd-logind[1452]: New session 3 of user core. 
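Each "Accepted publickey" entry above carries the key's SHA256 fingerprint; it can be matched against what update-ssh-keys installed for the core user (sketch, using the authorized_keys path the bash entry above reported updating):

  # prints one fingerprint per key; compare with the SHA256:sP9eTy... field above
  ssh-keygen -lf /home/core/.ssh/authorized_keys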
Nov 1 00:16:14.534235 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:16:14.623884 sshd[1568]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:14.640888 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:16:14.642948 systemd[1]: sshd@2-10.0.0.38:22-10.0.0.1:43816.service: Deactivated successfully. Nov 1 00:16:14.650409 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:16:14.668149 systemd-logind[1452]: Removed session 3. Nov 1 00:16:15.408470 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:16:15.426119 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:16:15.428701 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:16:15.434492 systemd[1]: Startup finished in 1.193s (kernel) + 8.150s (initrd) + 9.569s (userspace) = 18.913s. Nov 1 00:16:17.553377 kubelet[1578]: E1101 00:16:17.553292 1578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:16:17.564468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:16:17.565692 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:16:17.568539 systemd[1]: kubelet.service: Consumed 3.392s CPU time. Nov 1 00:16:24.746172 systemd[1]: Started sshd@3-10.0.0.38:22-10.0.0.1:42974.service - OpenSSH per-connection server daemon (10.0.0.1:42974). Nov 1 00:16:24.833179 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 42974 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:16:24.837227 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:24.854558 systemd-logind[1452]: New session 4 of user core. Nov 1 00:16:24.867412 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:16:24.973395 sshd[1593]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:25.002301 systemd[1]: sshd@3-10.0.0.38:22-10.0.0.1:42974.service: Deactivated successfully. Nov 1 00:16:25.008700 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:16:25.018611 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:16:25.031531 systemd[1]: Started sshd@4-10.0.0.38:22-10.0.0.1:42986.service - OpenSSH per-connection server daemon (10.0.0.1:42986). Nov 1 00:16:25.040587 systemd-logind[1452]: Removed session 4. Nov 1 00:16:25.094749 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 42986 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:16:25.097598 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:25.114820 systemd-logind[1452]: New session 5 of user core. Nov 1 00:16:25.125278 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:16:25.213464 sshd[1600]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:25.228638 systemd[1]: sshd@4-10.0.0.38:22-10.0.0.1:42986.service: Deactivated successfully. Nov 1 00:16:25.233196 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:16:25.241195 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. 
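The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file during init/join, so the restart loop seen through the rest of this log is expected until then. A minimal illustrative sketch of such a file (the kubeadm-generated version carries many more fields):

  cat <<'EOF' >/var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd          # matches SystemdCgroup=true in the containerd config above
  staticPodPath: /etc/kubernetes/manifests
  EOF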
Nov 1 00:16:25.253561 systemd[1]: Started sshd@5-10.0.0.38:22-10.0.0.1:42994.service - OpenSSH per-connection server daemon (10.0.0.1:42994). Nov 1 00:16:25.257142 systemd-logind[1452]: Removed session 5. Nov 1 00:16:25.345132 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 42994 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:16:25.348837 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:25.353878 systemd-logind[1452]: New session 6 of user core. Nov 1 00:16:25.364238 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 00:16:25.435827 sshd[1607]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:25.445898 systemd[1]: sshd@5-10.0.0.38:22-10.0.0.1:42994.service: Deactivated successfully. Nov 1 00:16:25.448401 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:16:25.453707 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:16:25.469283 systemd[1]: Started sshd@6-10.0.0.38:22-10.0.0.1:43004.service - OpenSSH per-connection server daemon (10.0.0.1:43004). Nov 1 00:16:25.475570 systemd-logind[1452]: Removed session 6. Nov 1 00:16:25.551084 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 43004 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:16:25.551511 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:25.565127 systemd-logind[1452]: New session 7 of user core. Nov 1 00:16:25.575826 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:16:25.659608 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:16:25.660337 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:16:25.688081 sudo[1617]: pam_unix(sudo:session): session closed for user root Nov 1 00:16:25.692733 sshd[1614]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:25.711119 systemd[1]: sshd@6-10.0.0.38:22-10.0.0.1:43004.service: Deactivated successfully. Nov 1 00:16:25.719474 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:16:25.722784 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:16:25.730664 systemd[1]: Started sshd@7-10.0.0.38:22-10.0.0.1:43006.service - OpenSSH per-connection server daemon (10.0.0.1:43006). Nov 1 00:16:25.733888 systemd-logind[1452]: Removed session 7. Nov 1 00:16:25.786781 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 43006 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:16:25.789225 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:25.801553 systemd-logind[1452]: New session 8 of user core. Nov 1 00:16:25.810627 systemd[1]: Started session-8.scope - Session 8 of User core. 
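The "setenforce 1" sudo entry above switches SELinux to enforcing at runtime; assuming the same host, the resulting mode can be confirmed with:

  getenforce     # Enforcing / Permissive / Disabled
  sestatus       # fuller report, where the policy utilities are installed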
Nov 1 00:16:25.901830 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:16:25.902430 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:16:25.927202 sudo[1626]: pam_unix(sudo:session): session closed for user root Nov 1 00:16:25.939638 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:16:25.940836 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:16:25.971885 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:16:25.976378 auditctl[1629]: No rules Nov 1 00:16:25.977492 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:16:25.977793 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:16:25.987800 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:16:26.045742 augenrules[1647]: No rules Nov 1 00:16:26.047354 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:16:26.049785 sudo[1625]: pam_unix(sudo:session): session closed for user root Nov 1 00:16:26.053366 sshd[1622]: pam_unix(sshd:session): session closed for user core Nov 1 00:16:26.074162 systemd[1]: sshd@7-10.0.0.38:22-10.0.0.1:43006.service: Deactivated successfully. Nov 1 00:16:26.077490 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:16:26.084162 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:16:26.103060 systemd[1]: Started sshd@8-10.0.0.38:22-10.0.0.1:34570.service - OpenSSH per-connection server daemon (10.0.0.1:34570). Nov 1 00:16:26.114606 systemd-logind[1452]: Removed session 8. Nov 1 00:16:26.192655 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 34570 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:16:26.194421 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:16:26.224908 systemd-logind[1452]: New session 9 of user core. Nov 1 00:16:26.237403 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:16:26.305508 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:16:26.306372 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:16:27.746444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:16:27.777567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:16:28.192390 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:16:28.194060 (dockerd)[1680]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:16:28.303251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
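The audit-rules cycle above ends up empty by construction: the two files under /etc/audit/rules.d/ were just removed, so both auditctl and augenrules report "No rules". The reload it performs amounts to:

  # augenrules concatenates /etc/audit/rules.d/*.rules into
  # /etc/audit/audit.rules and loads the result into the kernel
  augenrules --load
  auditctl -l          # lists loaded rules; prints "No rules" when empty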
Nov 1 00:16:28.366442 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:16:28.678034 kubelet[1685]: E1101 00:16:28.675598 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:16:28.705425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:16:28.705921 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:16:29.640973 dockerd[1680]: time="2025-11-01T00:16:29.634802208Z" level=info msg="Starting up" Nov 1 00:16:30.311511 dockerd[1680]: time="2025-11-01T00:16:30.311318077Z" level=info msg="Loading containers: start." Nov 1 00:16:30.646936 kernel: Initializing XFRM netlink socket Nov 1 00:16:30.979802 systemd-networkd[1404]: docker0: Link UP Nov 1 00:16:31.061889 dockerd[1680]: time="2025-11-01T00:16:31.060943422Z" level=info msg="Loading containers: done." Nov 1 00:16:31.170375 dockerd[1680]: time="2025-11-01T00:16:31.170249177Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:16:31.170703 dockerd[1680]: time="2025-11-01T00:16:31.170441254Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:16:31.178473 dockerd[1680]: time="2025-11-01T00:16:31.173951752Z" level=info msg="Daemon has completed initialization" Nov 1 00:16:31.356444 dockerd[1680]: time="2025-11-01T00:16:31.355307351Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:16:31.355942 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:16:33.010421 containerd[1465]: time="2025-11-01T00:16:33.009969295Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 1 00:16:34.096301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796478399.mount: Deactivated successfully. 
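dockerd above settles on the overlay2 storage driver and warns that native diff is off because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; the chosen driver and server version can be confirmed with:

  docker info --format '{{.Driver}}'               # expect overlay2
  docker version --format '{{.Server.Version}}'    # expect 26.1.0 per the log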
Nov 1 00:16:36.321510 containerd[1465]: time="2025-11-01T00:16:36.319911333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:36.323802 containerd[1465]: time="2025-11-01T00:16:36.323156454Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 1 00:16:36.325833 containerd[1465]: time="2025-11-01T00:16:36.325700336Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:36.331421 containerd[1465]: time="2025-11-01T00:16:36.331339605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:36.335109 containerd[1465]: time="2025-11-01T00:16:36.334364194Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.324301078s" Nov 1 00:16:36.335109 containerd[1465]: time="2025-11-01T00:16:36.334916379Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 1 00:16:36.339478 containerd[1465]: time="2025-11-01T00:16:36.338787321Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 1 00:16:38.739005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:16:38.776558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:16:39.130600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:16:39.147378 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:16:39.356510 kubelet[1912]: E1101 00:16:39.356422 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:16:39.364696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:16:39.364970 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
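For scale, the kube-apiserver pull above moved 30111492 bytes in 3.324301078 s, roughly 8.6 MiB/s:

  python3 -c 'print(30111492 / 3.324301078 / 2**20)'   # ≈ 8.64 (MiB/s)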
Nov 1 00:16:39.969159 containerd[1465]: time="2025-11-01T00:16:39.968425620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:39.974043 containerd[1465]: time="2025-11-01T00:16:39.973895810Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 1 00:16:39.984898 containerd[1465]: time="2025-11-01T00:16:39.984667634Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:39.993211 containerd[1465]: time="2025-11-01T00:16:39.992241333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:39.995609 containerd[1465]: time="2025-11-01T00:16:39.995334317Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 3.656485685s" Nov 1 00:16:39.995609 containerd[1465]: time="2025-11-01T00:16:39.995401983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 1 00:16:39.996426 containerd[1465]: time="2025-11-01T00:16:39.996374749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 1 00:16:44.150196 containerd[1465]: time="2025-11-01T00:16:44.150093287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:44.153650 containerd[1465]: time="2025-11-01T00:16:44.152177566Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 1 00:16:44.156345 containerd[1465]: time="2025-11-01T00:16:44.156171187Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:44.163963 containerd[1465]: time="2025-11-01T00:16:44.163050485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:44.167612 containerd[1465]: time="2025-11-01T00:16:44.167184457Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 4.170760395s" Nov 1 00:16:44.167612 containerd[1465]: time="2025-11-01T00:16:44.167247547Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 1 00:16:44.181840 containerd[1465]: 
time="2025-11-01T00:16:44.181130768Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 1 00:16:46.572437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736062638.mount: Deactivated successfully. Nov 1 00:16:48.604410 containerd[1465]: time="2025-11-01T00:16:48.604278538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:48.608628 containerd[1465]: time="2025-11-01T00:16:48.608151398Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 1 00:16:48.610204 containerd[1465]: time="2025-11-01T00:16:48.610113163Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:48.614663 containerd[1465]: time="2025-11-01T00:16:48.614575090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:48.615927 containerd[1465]: time="2025-11-01T00:16:48.615782013Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 4.434578466s" Nov 1 00:16:48.615927 containerd[1465]: time="2025-11-01T00:16:48.615847744Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 1 00:16:48.618103 containerd[1465]: time="2025-11-01T00:16:48.618049896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 1 00:16:49.452733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 00:16:49.459236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:16:49.471232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2495990115.mount: Deactivated successfully. Nov 1 00:16:49.688280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:16:49.695623 (kubelet)[1946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:16:49.854846 kubelet[1946]: E1101 00:16:49.854754 1946 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:16:49.860595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:16:49.860936 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 00:16:53.715775 containerd[1465]: time="2025-11-01T00:16:53.714752803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:53.717619 containerd[1465]: time="2025-11-01T00:16:53.717026032Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 1 00:16:53.719479 containerd[1465]: time="2025-11-01T00:16:53.719405824Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:53.725727 containerd[1465]: time="2025-11-01T00:16:53.725645793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:53.728367 containerd[1465]: time="2025-11-01T00:16:53.727401975Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 5.109297453s" Nov 1 00:16:53.728367 containerd[1465]: time="2025-11-01T00:16:53.727477362Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 1 00:16:53.731599 containerd[1465]: time="2025-11-01T00:16:53.731543590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:16:54.450431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1314478212.mount: Deactivated successfully. 
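Note the version skew: the kubelet is pulling registry.k8s.io/pause:3.10, while the CRI config dump earlier shows containerd's own SandboxImage pinned to registry.k8s.io/pause:3.8. If the two should agree, the relevant stanza (sketch for /etc/containerd/config.toml) is:

  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.10"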
Nov 1 00:16:54.462414 containerd[1465]: time="2025-11-01T00:16:54.462316081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:54.464495 containerd[1465]: time="2025-11-01T00:16:54.463638033Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 1 00:16:54.466496 containerd[1465]: time="2025-11-01T00:16:54.466437791Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:54.468811 containerd[1465]: time="2025-11-01T00:16:54.468710305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:54.469927 containerd[1465]: time="2025-11-01T00:16:54.469804415Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 738.203806ms" Nov 1 00:16:54.469927 containerd[1465]: time="2025-11-01T00:16:54.469849509Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:16:54.470560 containerd[1465]: time="2025-11-01T00:16:54.470519798Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 1 00:16:55.031530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133421860.mount: Deactivated successfully. Nov 1 00:16:55.537238 update_engine[1453]: I20251101 00:16:55.537070 1453 update_attempter.cc:509] Updating boot flags... 
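"Updating boot flags..." is Flatcar's A/B update machinery touching the partition priority flags; the BTRFS/udev partition rescans on the next line follow from that. Its state between scheduled checks can be inspected with (assuming the client ships on the image, as on stock Flatcar):

  update_engine_client -status    # CURRENT_OP=UPDATE_STATUS_IDLE between checks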
Nov 1 00:16:55.835896 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2026) Nov 1 00:16:55.886096 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2029) Nov 1 00:16:58.647849 containerd[1465]: time="2025-11-01T00:16:58.647756046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:58.648631 containerd[1465]: time="2025-11-01T00:16:58.648560837Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 1 00:16:58.649946 containerd[1465]: time="2025-11-01T00:16:58.649916120Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:58.654024 containerd[1465]: time="2025-11-01T00:16:58.653979584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:16:58.655465 containerd[1465]: time="2025-11-01T00:16:58.655425513Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.184859308s" Nov 1 00:16:58.655524 containerd[1465]: time="2025-11-01T00:16:58.655474963Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 1 00:16:59.977964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 1 00:16:59.987196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:00.244694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:00.251398 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:17:00.312342 kubelet[2108]: E1101 00:17:00.312158 2108 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:17:00.317606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:17:00.317975 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:17:02.238609 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:02.249218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:02.296273 systemd[1]: Reloading requested from client PID 2123 ('systemctl') (unit session-9.scope)... Nov 1 00:17:02.296317 systemd[1]: Reloading... Nov 1 00:17:02.417908 zram_generator::config[2167]: No configuration found. Nov 1 00:17:03.153498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
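The docker.socket warning above asks for exactly this change; a drop-in sketch that moves the listener to /run/docker.sock without editing the vendor unit:

  mkdir -p /etc/systemd/system/docker.socket.d
  cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-path.conf
  [Socket]
  ListenStream=
  ListenStream=/run/docker.sock
  EOF
  systemctl daemon-reload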
Nov 1 00:17:03.267694 systemd[1]: Reloading finished in 970 ms. Nov 1 00:17:03.337923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:03.343375 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:17:03.343705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:03.353263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:03.584695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:03.600569 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:17:03.676733 kubelet[2214]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:17:03.676733 kubelet[2214]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:17:03.676733 kubelet[2214]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:17:03.676733 kubelet[2214]: I1101 00:17:03.675180 2214 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:17:04.458888 kubelet[2214]: I1101 00:17:04.458808 2214 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:17:04.458888 kubelet[2214]: I1101 00:17:04.458905 2214 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:17:04.459293 kubelet[2214]: I1101 00:17:04.459255 2214 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:17:04.533839 kubelet[2214]: I1101 00:17:04.533776 2214 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:17:04.534433 kubelet[2214]: E1101 00:17:04.534257 2214 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:17:04.546389 kubelet[2214]: E1101 00:17:04.545981 2214 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:17:04.546389 kubelet[2214]: I1101 00:17:04.546317 2214 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:17:04.558131 kubelet[2214]: I1101 00:17:04.558059 2214 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:17:04.558786 kubelet[2214]: I1101 00:17:04.558718 2214 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:17:04.559017 kubelet[2214]: I1101 00:17:04.558780 2214 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:17:04.559187 kubelet[2214]: I1101 00:17:04.559026 2214 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:17:04.559187 kubelet[2214]: I1101 00:17:04.559036 2214 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:17:04.559296 kubelet[2214]: I1101 00:17:04.559269 2214 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:17:04.566027 kubelet[2214]: I1101 00:17:04.565951 2214 kubelet.go:480] "Attempting to sync node with API server" Nov 1 00:17:04.566027 kubelet[2214]: I1101 00:17:04.566017 2214 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:17:04.566250 kubelet[2214]: I1101 00:17:04.566081 2214 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:17:04.566250 kubelet[2214]: I1101 00:17:04.566116 2214 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:17:04.574766 kubelet[2214]: E1101 00:17:04.574667 2214 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:17:04.574766 kubelet[2214]: E1101 00:17:04.574705 2214 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:17:04.576480 kubelet[2214]: 
I1101 00:17:04.576440 2214 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:17:04.577168 kubelet[2214]: I1101 00:17:04.577131 2214 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:17:04.578968 kubelet[2214]: W1101 00:17:04.578927 2214 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:17:04.583360 kubelet[2214]: I1101 00:17:04.583305 2214 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:17:04.583509 kubelet[2214]: I1101 00:17:04.583398 2214 server.go:1289] "Started kubelet" Nov 1 00:17:04.584139 kubelet[2214]: I1101 00:17:04.583897 2214 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:17:04.589815 kubelet[2214]: I1101 00:17:04.585516 2214 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:17:04.589815 kubelet[2214]: I1101 00:17:04.586596 2214 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:17:04.589815 kubelet[2214]: I1101 00:17:04.585534 2214 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:17:04.589815 kubelet[2214]: I1101 00:17:04.587528 2214 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:17:04.589815 kubelet[2214]: I1101 00:17:04.587758 2214 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:17:04.590583 kubelet[2214]: E1101 00:17:04.590520 2214 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:17:04.590766 kubelet[2214]: E1101 00:17:04.590610 2214 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:17:04.590766 kubelet[2214]: I1101 00:17:04.590642 2214 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:17:04.591198 kubelet[2214]: I1101 00:17:04.590908 2214 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:17:04.591198 kubelet[2214]: I1101 00:17:04.591018 2214 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:17:04.591845 kubelet[2214]: E1101 00:17:04.591490 2214 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:17:04.593548 kubelet[2214]: I1101 00:17:04.593511 2214 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:17:04.593659 kubelet[2214]: I1101 00:17:04.593635 2214 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:17:04.645123 kubelet[2214]: I1101 00:17:04.644638 2214 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:17:04.645459 kubelet[2214]: E1101 00:17:04.593217 2214 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.38:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.38:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873b9d9dc14075b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:17:04.583337819 +0000 UTC m=+0.976274700,LastTimestamp:2025-11-01 00:17:04.583337819 +0000 UTC m=+0.976274700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:17:04.648205 kubelet[2214]: E1101 00:17:04.648098 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="200ms" Nov 1 00:17:04.667380 kubelet[2214]: I1101 00:17:04.667299 2214 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:17:04.669120 kubelet[2214]: I1101 00:17:04.668934 2214 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 1 00:17:04.669120 kubelet[2214]: I1101 00:17:04.669051 2214 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:17:04.669120 kubelet[2214]: I1101 00:17:04.669089 2214 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
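
The recurring "dial tcp 10.0.0.38:6443: connect: connection refused" errors above are expected at this point in the boot: the kubelet is itself about to start the kube-apiserver as a static pod, so every list/watch against https://10.0.0.38:6443 fails until that pod is serving. A minimal Go sketch of the manual readiness probe this implies; the address is taken from the log, while the /healthz path and the skip-verify TLS client are illustrative assumptions, not anything the kubelet runs:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Apiserver endpoint the kubelet is dialing in the log above.
        url := "https://10.0.0.38:6443/healthz"
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Good enough for a by-hand check; a real client would trust
            // the cluster CA rather than skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("not up yet:", err) // the "connection refused" phase
                time.Sleep(time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Println("apiserver answered:", resp.Status)
            return
        }
    }
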
Nov 1 00:17:04.669120 kubelet[2214]: I1101 00:17:04.669107 2214 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:17:04.669280 kubelet[2214]: E1101 00:17:04.669165 2214 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:17:04.669668 kubelet[2214]: I1101 00:17:04.669442 2214 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:17:04.671313 kubelet[2214]: I1101 00:17:04.671252 2214 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:17:04.671439 kubelet[2214]: I1101 00:17:04.671399 2214 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:17:04.671586 kubelet[2214]: E1101 00:17:04.671483 2214 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:17:04.691120 kubelet[2214]: E1101 00:17:04.691052 2214 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:17:04.697138 kubelet[2214]: I1101 00:17:04.697048 2214 policy_none.go:49] "None policy: Start" Nov 1 00:17:04.697138 kubelet[2214]: I1101 00:17:04.697106 2214 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:17:04.697138 kubelet[2214]: I1101 00:17:04.697160 2214 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:17:04.709532 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:17:04.728932 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 00:17:04.733054 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 00:17:04.745516 kubelet[2214]: E1101 00:17:04.745299 2214 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:17:04.746114 kubelet[2214]: I1101 00:17:04.746016 2214 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:17:04.746114 kubelet[2214]: I1101 00:17:04.746046 2214 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:17:04.748249 kubelet[2214]: E1101 00:17:04.748180 2214 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:17:04.748637 kubelet[2214]: I1101 00:17:04.748465 2214 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:17:04.748701 kubelet[2214]: E1101 00:17:04.748563 2214 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:17:04.849351 kubelet[2214]: E1101 00:17:04.849293 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="400ms" Nov 1 00:17:04.849655 kubelet[2214]: I1101 00:17:04.849504 2214 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:17:04.849888 kubelet[2214]: E1101 00:17:04.849831 2214 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Nov 1 00:17:04.892494 kubelet[2214]: I1101 00:17:04.892389 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:04.892494 kubelet[2214]: I1101 00:17:04.892464 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:04.892899 kubelet[2214]: I1101 00:17:04.892528 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:04.892899 kubelet[2214]: I1101 00:17:04.892564 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:04.892899 kubelet[2214]: I1101 00:17:04.892594 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:05.052079 kubelet[2214]: I1101 00:17:05.052019 2214 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:17:05.052486 kubelet[2214]: E1101 00:17:05.052450 2214 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Nov 1 00:17:05.250547 kubelet[2214]: E1101 
00:17:05.250447 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="800ms" Nov 1 00:17:05.394911 kubelet[2214]: I1101 00:17:05.394711 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:17:05.402224 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 1 00:17:05.413700 kubelet[2214]: E1101 00:17:05.413646 2214 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:17:05.414263 kubelet[2214]: E1101 00:17:05.414223 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:05.415322 containerd[1465]: time="2025-11-01T00:17:05.415226322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:05.417081 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Nov 1 00:17:05.426627 kubelet[2214]: E1101 00:17:05.426569 2214 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:17:05.431095 systemd[1]: Created slice kubepods-burstable-pod795d8dcadf5b6e441e8ff287297ba679.slice - libcontainer container kubepods-burstable-pod795d8dcadf5b6e441e8ff287297ba679.slice. 
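
The lease controller fails the same way but retries with a doubling interval: 200ms, then 400ms, then 800ms here, and 1.6s a little further down. A sketch of that capped-exponential retry pattern; the 200ms seed and the doubling are read off the log, while the cap and attempt count are assumptions and ensureLease is a stand-in, not the kubelet's actual function:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Stand-in for the kubelet's lease-creation call; it fails the way
    // the log shows while the apiserver is unreachable.
    func ensureLease() error {
        return errors.New("dial tcp 10.0.0.38:6443: connect: connection refused")
    }

    func main() {
        interval := 200 * time.Millisecond  // first retry interval in the log
        const maxInterval = 7 * time.Second // assumed cap for this sketch
        for attempt := 0; attempt < 5; attempt++ {
            err := ensureLease()
            if err == nil {
                return
            }
            fmt.Printf("failed to ensure lease, will retry in %v: %v\n", interval, err)
            time.Sleep(interval)
            if interval *= 2; interval > maxInterval {
                interval = maxInterval
            }
        }
    }
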
Nov 1 00:17:05.440455 kubelet[2214]: E1101 00:17:05.440391 2214 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:17:05.449873 kubelet[2214]: E1101 00:17:05.449800 2214 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:17:05.456906 kubelet[2214]: I1101 00:17:05.454876 2214 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:17:05.456906 kubelet[2214]: E1101 00:17:05.455290 2214 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Nov 1 00:17:05.495392 kubelet[2214]: I1101 00:17:05.495312 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/795d8dcadf5b6e441e8ff287297ba679-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"795d8dcadf5b6e441e8ff287297ba679\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:05.495561 kubelet[2214]: I1101 00:17:05.495432 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/795d8dcadf5b6e441e8ff287297ba679-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"795d8dcadf5b6e441e8ff287297ba679\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:05.495561 kubelet[2214]: I1101 00:17:05.495482 2214 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/795d8dcadf5b6e441e8ff287297ba679-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"795d8dcadf5b6e441e8ff287297ba679\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:05.617962 kubelet[2214]: E1101 00:17:05.617850 2214 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:17:05.729724 kubelet[2214]: E1101 00:17:05.729245 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:05.734036 containerd[1465]: time="2025-11-01T00:17:05.732606046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:05.742012 kubelet[2214]: E1101 00:17:05.741967 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:05.744221 containerd[1465]: time="2025-11-01T00:17:05.742785331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:795d8dcadf5b6e441e8ff287297ba679,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:05.898228 kubelet[2214]: E1101 00:17:05.898131 2214 
reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:17:05.898481 kubelet[2214]: E1101 00:17:05.898444 2214 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:17:06.051619 kubelet[2214]: E1101 00:17:06.051395 2214 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.38:6443: connect: connection refused" interval="1.6s" Nov 1 00:17:06.144819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1178421405.mount: Deactivated successfully. Nov 1 00:17:06.197688 containerd[1465]: time="2025-11-01T00:17:06.196100846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:17:06.204036 containerd[1465]: time="2025-11-01T00:17:06.200452035Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 00:17:06.207736 containerd[1465]: time="2025-11-01T00:17:06.206819106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:17:06.211259 containerd[1465]: time="2025-11-01T00:17:06.209847393Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:17:06.212536 containerd[1465]: time="2025-11-01T00:17:06.212465292Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:17:06.213759 containerd[1465]: time="2025-11-01T00:17:06.213677665Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:17:06.214832 containerd[1465]: time="2025-11-01T00:17:06.214641963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:17:06.223234 containerd[1465]: time="2025-11-01T00:17:06.223146538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:17:06.225597 containerd[1465]: time="2025-11-01T00:17:06.224685882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 491.978463ms" Nov 1 
00:17:06.227049 containerd[1465]: time="2025-11-01T00:17:06.226787774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 811.439589ms" Nov 1 00:17:06.228481 containerd[1465]: time="2025-11-01T00:17:06.228392137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.473319ms" Nov 1 00:17:06.257467 kubelet[2214]: I1101 00:17:06.257395 2214 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:17:06.259248 kubelet[2214]: E1101 00:17:06.258328 2214 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.38:6443/api/v1/nodes\": dial tcp 10.0.0.38:6443: connect: connection refused" node="localhost" Nov 1 00:17:06.561901 kubelet[2214]: E1101 00:17:06.561175 2214 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:17:06.582129 containerd[1465]: time="2025-11-01T00:17:06.579753337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:06.582129 containerd[1465]: time="2025-11-01T00:17:06.580795731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:06.582129 containerd[1465]: time="2025-11-01T00:17:06.580808737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:06.582129 containerd[1465]: time="2025-11-01T00:17:06.581830841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:06.585234 containerd[1465]: time="2025-11-01T00:17:06.585099225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:06.585234 containerd[1465]: time="2025-11-01T00:17:06.585180958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:06.585234 containerd[1465]: time="2025-11-01T00:17:06.585197070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:06.585385 containerd[1465]: time="2025-11-01T00:17:06.585274704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:06.586769 containerd[1465]: time="2025-11-01T00:17:06.586641384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:06.586769 containerd[1465]: time="2025-11-01T00:17:06.586733517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:06.587013 containerd[1465]: time="2025-11-01T00:17:06.586754018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:06.588711 containerd[1465]: time="2025-11-01T00:17:06.587255927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:06.961211 systemd[1]: Started cri-containerd-f8dec8c0a4806d3b4bdd4357f6a00d30530139888754728c992ad57c6773642f.scope - libcontainer container f8dec8c0a4806d3b4bdd4357f6a00d30530139888754728c992ad57c6773642f. Nov 1 00:17:06.965931 systemd[1]: Started cri-containerd-8d13b19e6299e62e6d4d14e40f88f9c96bfb0e700571e29f298f90aae96adc63.scope - libcontainer container 8d13b19e6299e62e6d4d14e40f88f9c96bfb0e700571e29f298f90aae96adc63. Nov 1 00:17:06.993119 systemd[1]: Started cri-containerd-bf40c34a3d542eafaf18f680b338efa1d64647afaad0b7b2e98f95dade73587f.scope - libcontainer container bf40c34a3d542eafaf18f680b338efa1d64647afaad0b7b2e98f95dade73587f. Nov 1 00:17:07.074836 containerd[1465]: time="2025-11-01T00:17:07.074774024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8dec8c0a4806d3b4bdd4357f6a00d30530139888754728c992ad57c6773642f\"" Nov 1 00:17:07.078430 kubelet[2214]: E1101 00:17:07.078287 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:07.090561 containerd[1465]: time="2025-11-01T00:17:07.090504818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d13b19e6299e62e6d4d14e40f88f9c96bfb0e700571e29f298f90aae96adc63\"" Nov 1 00:17:07.091479 kubelet[2214]: E1101 00:17:07.091411 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:07.103945 containerd[1465]: time="2025-11-01T00:17:07.103883311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:795d8dcadf5b6e441e8ff287297ba679,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf40c34a3d542eafaf18f680b338efa1d64647afaad0b7b2e98f95dade73587f\"" Nov 1 00:17:07.105369 kubelet[2214]: E1101 00:17:07.105294 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:07.206514 containerd[1465]: time="2025-11-01T00:17:07.206431553Z" level=info msg="CreateContainer within sandbox \"f8dec8c0a4806d3b4bdd4357f6a00d30530139888754728c992ad57c6773642f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:17:07.321015 kubelet[2214]: E1101 00:17:07.320957 2214 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:17:07.327706 containerd[1465]: time="2025-11-01T00:17:07.327393805Z" level=info msg="CreateContainer within sandbox \"8d13b19e6299e62e6d4d14e40f88f9c96bfb0e700571e29f298f90aae96adc63\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:17:07.333894 containerd[1465]: time="2025-11-01T00:17:07.333806668Z" level=info msg="CreateContainer within sandbox \"bf40c34a3d542eafaf18f680b338efa1d64647afaad0b7b2e98f95dade73587f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:17:07.358667 containerd[1465]: time="2025-11-01T00:17:07.358573604Z" level=info msg="CreateContainer within sandbox \"f8dec8c0a4806d3b4bdd4357f6a00d30530139888754728c992ad57c6773642f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed837e487d56851708a97b7aa8dc22a8695c73346a162b6e3e4b6cc30c267a3e\"" Nov 1 00:17:07.360000 containerd[1465]: time="2025-11-01T00:17:07.359931359Z" level=info msg="StartContainer for \"ed837e487d56851708a97b7aa8dc22a8695c73346a162b6e3e4b6cc30c267a3e\"" Nov 1 00:17:07.370880 containerd[1465]: time="2025-11-01T00:17:07.370803305Z" level=info msg="CreateContainer within sandbox \"8d13b19e6299e62e6d4d14e40f88f9c96bfb0e700571e29f298f90aae96adc63\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e3f7794a1c08057e697255f1bd29c7a3dd52e6671d253970c5145534f68b5ca5\"" Nov 1 00:17:07.371686 containerd[1465]: time="2025-11-01T00:17:07.371646349Z" level=info msg="StartContainer for \"e3f7794a1c08057e697255f1bd29c7a3dd52e6671d253970c5145534f68b5ca5\"" Nov 1 00:17:07.377126 containerd[1465]: time="2025-11-01T00:17:07.376979679Z" level=info msg="CreateContainer within sandbox \"bf40c34a3d542eafaf18f680b338efa1d64647afaad0b7b2e98f95dade73587f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1696b88dd6c62077823b398f5a04b3b73726368f9ef3d112a495d41accf76ec1\"" Nov 1 00:17:07.378066 containerd[1465]: time="2025-11-01T00:17:07.378008903Z" level=info msg="StartContainer for \"1696b88dd6c62077823b398f5a04b3b73726368f9ef3d112a495d41accf76ec1\"" Nov 1 00:17:07.408089 systemd[1]: Started cri-containerd-ed837e487d56851708a97b7aa8dc22a8695c73346a162b6e3e4b6cc30c267a3e.scope - libcontainer container ed837e487d56851708a97b7aa8dc22a8695c73346a162b6e3e4b6cc30c267a3e. Nov 1 00:17:07.413328 systemd[1]: Started cri-containerd-e3f7794a1c08057e697255f1bd29c7a3dd52e6671d253970c5145534f68b5ca5.scope - libcontainer container e3f7794a1c08057e697255f1bd29c7a3dd52e6671d253970c5145534f68b5ca5. Nov 1 00:17:07.427007 systemd[1]: Started cri-containerd-1696b88dd6c62077823b398f5a04b3b73726368f9ef3d112a495d41accf76ec1.scope - libcontainer container 1696b88dd6c62077823b398f5a04b3b73726368f9ef3d112a495d41accf76ec1. 
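
The three cri-containerd scopes starting here are the payoff of the earlier "Adding static pod path" line: the kubelet materialized kube-apiserver, kube-controller-manager, and kube-scheduler from manifests under /etc/kubernetes/manifests. A stdlib-only sketch for watching which manifests the kubelet can see while debugging; the path comes from the log, the 2s poll interval is arbitrary, and the kubelet itself uses file notifications rather than a polling loop like this:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        dir := "/etc/kubernetes/manifests" // static pod path from the kubelet log
        seen := map[string]bool{}
        for {
            entries, err := os.ReadDir(dir)
            if err != nil {
                fmt.Println("read error:", err)
            }
            current := map[string]bool{}
            for _, e := range entries {
                current[e.Name()] = true
                if !seen[e.Name()] {
                    fmt.Println("manifest appeared:", e.Name())
                }
            }
            for name := range seen {
                if !current[name] {
                    fmt.Println("manifest removed:", name)
                }
            }
            seen = current
            time.Sleep(2 * time.Second)
        }
    }
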
Nov 1 00:17:07.497548 containerd[1465]: time="2025-11-01T00:17:07.497309567Z" level=info msg="StartContainer for \"ed837e487d56851708a97b7aa8dc22a8695c73346a162b6e3e4b6cc30c267a3e\" returns successfully" Nov 1 00:17:07.507137 containerd[1465]: time="2025-11-01T00:17:07.507005778Z" level=info msg="StartContainer for \"e3f7794a1c08057e697255f1bd29c7a3dd52e6671d253970c5145534f68b5ca5\" returns successfully" Nov 1 00:17:07.512421 containerd[1465]: time="2025-11-01T00:17:07.512375230Z" level=info msg="StartContainer for \"1696b88dd6c62077823b398f5a04b3b73726368f9ef3d112a495d41accf76ec1\" returns successfully" Nov 1 00:17:07.578826 kubelet[2214]: E1101 00:17:07.577697 2214 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:17:07.685045 kubelet[2214]: E1101 00:17:07.684990 2214 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:17:07.685357 kubelet[2214]: E1101 00:17:07.685223 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:07.694755 kubelet[2214]: E1101 00:17:07.694707 2214 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:17:07.694949 kubelet[2214]: E1101 00:17:07.694898 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:07.699424 kubelet[2214]: E1101 00:17:07.699377 2214 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:17:07.699618 kubelet[2214]: E1101 00:17:07.699592 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:07.865185 kubelet[2214]: I1101 00:17:07.863792 2214 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:17:08.697220 kubelet[2214]: E1101 00:17:08.697166 2214 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:17:08.699401 kubelet[2214]: E1101 00:17:08.697342 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:08.702495 kubelet[2214]: E1101 00:17:08.701766 2214 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:17:08.702495 kubelet[2214]: E1101 00:17:08.701993 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:08.702495 kubelet[2214]: E1101 00:17:08.702322 2214 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Nov 1 00:17:08.702495 kubelet[2214]: E1101 00:17:08.702436 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:10.297573 kubelet[2214]: E1101 00:17:10.297463 2214 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:17:10.545035 kubelet[2214]: E1101 00:17:10.544735 2214 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1873b9d9dc14075b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:17:04.583337819 +0000 UTC m=+0.976274700,LastTimestamp:2025-11-01 00:17:04.583337819 +0000 UTC m=+0.976274700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:17:10.551580 kubelet[2214]: I1101 00:17:10.551358 2214 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:17:10.551580 kubelet[2214]: E1101 00:17:10.551452 2214 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 1 00:17:10.587501 kubelet[2214]: E1101 00:17:10.587445 2214 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:17:10.687812 kubelet[2214]: E1101 00:17:10.687769 2214 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:17:10.789018 kubelet[2214]: E1101 00:17:10.788951 2214 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:17:10.893073 kubelet[2214]: I1101 00:17:10.892546 2214 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:10.900928 kubelet[2214]: E1101 00:17:10.900699 2214 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:10.900928 kubelet[2214]: I1101 00:17:10.900751 2214 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:10.902378 kubelet[2214]: E1101 00:17:10.902347 2214 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:10.902378 kubelet[2214]: I1101 00:17:10.902373 2214 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:17:10.904088 kubelet[2214]: E1101 00:17:10.903984 2214 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:17:11.571826 kubelet[2214]: I1101 00:17:11.571737 2214 apiserver.go:52] "Watching apiserver" Nov 1 
00:17:11.592151 kubelet[2214]: I1101 00:17:11.592074 2214 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:17:12.788880 kubelet[2214]: I1101 00:17:12.788817 2214 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:17:12.818408 kubelet[2214]: E1101 00:17:12.818285 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:13.712935 kubelet[2214]: E1101 00:17:13.711783 2214 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:13.924467 systemd[1]: Reloading requested from client PID 2507 ('systemctl') (unit session-9.scope)... Nov 1 00:17:13.924493 systemd[1]: Reloading... Nov 1 00:17:14.101919 zram_generator::config[2549]: No configuration found. Nov 1 00:17:14.368808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:17:14.538596 systemd[1]: Reloading finished in 613 ms. Nov 1 00:17:14.626569 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:14.646814 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:17:14.647290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:14.647371 systemd[1]: kubelet.service: Consumed 1.688s CPU time, 134.6M memory peak, 0B memory swap peak. Nov 1 00:17:14.658262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:17:15.064524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:17:15.073177 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:17:15.148331 kubelet[2591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:17:15.148331 kubelet[2591]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:17:15.148331 kubelet[2591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
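
From here on the transcript interleaves systemd, containerd, and two kubelet generations (PIDs 2214 and 2591), all in the same journald shape: a timestamp, a unit[pid] prefix, then a klog-style message. A small Go sketch that splits one of these lines into its fields, using a line copied verbatim from this log; the regular expression approximates the format seen here, it is not journald's own grammar:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches lines of the form:
    //   Nov 1 00:17:15.171488 kubelet[2591]: I1101 00:17:15.168850 2591 server.go:530] "..."
    // capturing the timestamp, unit name, PID, and remaining message.
    var lineRE = regexp.MustCompile(`^(\w{3} +\d+ [\d:.]+) (\S+?)\[(\d+)\]: (.*)$`)

    func main() {
        line := `Nov 1 00:17:15.171488 kubelet[2591]: I1101 00:17:15.168850 2591 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"`
        m := lineRE.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        fmt.Printf("time=%q unit=%q pid=%s\nmsg=%q\n", m[1], m[2], m[3], m[4])
    }
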
Nov 1 00:17:15.149004 kubelet[2591]: I1101 00:17:15.148375 2591 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:17:15.171488 kubelet[2591]: I1101 00:17:15.168850 2591 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 1 00:17:15.171488 kubelet[2591]: I1101 00:17:15.169043 2591 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:17:15.171488 kubelet[2591]: I1101 00:17:15.169533 2591 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:17:15.174707 kubelet[2591]: I1101 00:17:15.174002 2591 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:17:15.183633 kubelet[2591]: I1101 00:17:15.183497 2591 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:17:15.208903 kubelet[2591]: E1101 00:17:15.208182 2591 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:17:15.208903 kubelet[2591]: I1101 00:17:15.208242 2591 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:17:15.223330 kubelet[2591]: I1101 00:17:15.219430 2591 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:17:15.223330 kubelet[2591]: I1101 00:17:15.219730 2591 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:17:15.223330 kubelet[2591]: I1101 00:17:15.219765 2591 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:17:15.223330 kubelet[2591]: I1101 00:17:15.220034 2591 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 1 00:17:15.223704 kubelet[2591]: I1101 00:17:15.220052 2591 container_manager_linux.go:303] "Creating device plugin manager" Nov 1 00:17:15.223704 kubelet[2591]: I1101 00:17:15.220153 2591 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:17:15.223704 kubelet[2591]: I1101 00:17:15.220387 2591 kubelet.go:480] "Attempting to sync node with API server" Nov 1 00:17:15.223704 kubelet[2591]: I1101 00:17:15.220404 2591 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:17:15.223704 kubelet[2591]: I1101 00:17:15.220431 2591 kubelet.go:386] "Adding apiserver pod source" Nov 1 00:17:15.223704 kubelet[2591]: I1101 00:17:15.220452 2591 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:17:15.223704 kubelet[2591]: I1101 00:17:15.222012 2591 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:17:15.226431 kubelet[2591]: I1101 00:17:15.226250 2591 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:17:15.239590 kubelet[2591]: I1101 00:17:15.237762 2591 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:17:15.239590 kubelet[2591]: I1101 00:17:15.237842 2591 server.go:1289] "Started kubelet" Nov 1 00:17:15.240270 kubelet[2591]: I1101 00:17:15.240044 2591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:17:15.240460 kubelet[2591]: I1101 00:17:15.240410 2591 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:17:15.240738 kubelet[2591]: I1101 00:17:15.240687 2591 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:17:15.244260 kubelet[2591]: I1101 00:17:15.241284 2591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:17:15.245163 kubelet[2591]: I1101 00:17:15.245118 2591 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:17:15.256013 kubelet[2591]: I1101 00:17:15.253990 2591 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:17:15.256013 kubelet[2591]: E1101 00:17:15.254363 2591 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:17:15.256013 kubelet[2591]: I1101 00:17:15.255044 2591 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:17:15.256013 kubelet[2591]: I1101 00:17:15.255246 2591 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:17:15.265283 kubelet[2591]: I1101 00:17:15.263224 2591 server.go:317] "Adding debug handlers to kubelet server" Nov 1 00:17:15.273249 kubelet[2591]: I1101 00:17:15.271388 2591 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:17:15.273249 kubelet[2591]: I1101 00:17:15.271652 2591 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:17:15.273249 kubelet[2591]: E1101 00:17:15.271675 2591 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:17:15.273249 kubelet[2591]: I1101 00:17:15.271803 2591 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:17:15.275597 kubelet[2591]: I1101 00:17:15.275455 2591 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 1 00:17:15.282111 kubelet[2591]: I1101 00:17:15.281294 2591 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 1 00:17:15.282111 kubelet[2591]: I1101 00:17:15.281337 2591 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 1 00:17:15.282111 kubelet[2591]: I1101 00:17:15.281378 2591 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:17:15.282111 kubelet[2591]: I1101 00:17:15.281388 2591 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 00:17:15.282111 kubelet[2591]: E1101 00:17:15.281448 2591 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:17:15.329826 kubelet[2591]: I1101 00:17:15.328845 2591 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:17:15.329826 kubelet[2591]: I1101 00:17:15.328889 2591 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:17:15.329826 kubelet[2591]: I1101 00:17:15.328949 2591 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:17:15.329826 kubelet[2591]: I1101 00:17:15.329231 2591 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:17:15.329826 kubelet[2591]: I1101 00:17:15.329248 2591 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:17:15.329826 kubelet[2591]: I1101 00:17:15.329273 2591 policy_none.go:49] "None policy: Start" Nov 1 00:17:15.329826 kubelet[2591]: I1101 00:17:15.329286 2591 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:17:15.329826 kubelet[2591]: I1101 00:17:15.329303 2591 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:17:15.329826 kubelet[2591]: I1101 00:17:15.329458 2591 state_mem.go:75] "Updated machine memory state" Nov 1 00:17:15.335904 kubelet[2591]: E1101 00:17:15.335877 2591 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:17:15.336536 kubelet[2591]: I1101 00:17:15.336515 2591 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:17:15.337839 kubelet[2591]: I1101 00:17:15.336696 2591 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:17:15.337839 kubelet[2591]: I1101 00:17:15.337664 2591 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:17:15.338526 kubelet[2591]: E1101 00:17:15.338499 2591 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:17:15.383714 kubelet[2591]: I1101 00:17:15.383650 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:15.383972 kubelet[2591]: I1101 00:17:15.383671 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:15.384543 kubelet[2591]: I1101 00:17:15.384525 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:17:15.414237 kubelet[2591]: E1101 00:17:15.414145 2591 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:17:15.451377 kubelet[2591]: I1101 00:17:15.450702 2591 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:17:15.460268 kubelet[2591]: I1101 00:17:15.459783 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:15.460268 kubelet[2591]: I1101 00:17:15.459832 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/795d8dcadf5b6e441e8ff287297ba679-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"795d8dcadf5b6e441e8ff287297ba679\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:15.460268 kubelet[2591]: I1101 00:17:15.459884 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:15.460268 kubelet[2591]: I1101 00:17:15.459905 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:15.460268 kubelet[2591]: I1101 00:17:15.459931 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:15.460581 kubelet[2591]: I1101 00:17:15.459956 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:17:15.460581 kubelet[2591]: I1101 00:17:15.459990 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/795d8dcadf5b6e441e8ff287297ba679-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"795d8dcadf5b6e441e8ff287297ba679\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:15.460581 kubelet[2591]: I1101 00:17:15.460007 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/795d8dcadf5b6e441e8ff287297ba679-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"795d8dcadf5b6e441e8ff287297ba679\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:15.460581 kubelet[2591]: I1101 00:17:15.460026 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:17:15.491114 kubelet[2591]: I1101 00:17:15.489128 2591 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:17:15.491114 kubelet[2591]: I1101 00:17:15.489278 2591 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:17:15.711759 kubelet[2591]: E1101 00:17:15.708807 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:15.716051 kubelet[2591]: E1101 00:17:15.715959 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:15.724332 kubelet[2591]: E1101 00:17:15.724257 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:16.225427 kubelet[2591]: I1101 00:17:16.225360 2591 apiserver.go:52] "Watching apiserver" Nov 1 00:17:16.257372 kubelet[2591]: I1101 00:17:16.257092 2591 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:17:16.297830 kubelet[2591]: I1101 00:17:16.297766 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:16.298527 kubelet[2591]: E1101 00:17:16.297893 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:16.298781 kubelet[2591]: I1101 00:17:16.298518 2591 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:17:16.395905 kubelet[2591]: E1101 00:17:16.393231 2591 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:17:16.395905 kubelet[2591]: E1101 00:17:16.393409 2591 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:17:16.395905 kubelet[2591]: E1101 00:17:16.393619 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:16.395905 kubelet[2591]: E1101 00:17:16.393744 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:16.446924 kubelet[2591]: I1101 00:17:16.446809 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.446782405 podStartE2EDuration="4.446782405s" podCreationTimestamp="2025-11-01 00:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:17:16.39856752 +0000 UTC m=+1.317564276" watchObservedRunningTime="2025-11-01 00:17:16.446782405 +0000 UTC m=+1.365779129" Nov 1 00:17:16.506843 kubelet[2591]: I1101 00:17:16.506555 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5065318269999999 podStartE2EDuration="1.506531827s" podCreationTimestamp="2025-11-01 00:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:17:16.447800406 +0000 UTC m=+1.366797130" watchObservedRunningTime="2025-11-01 00:17:16.506531827 +0000 UTC m=+1.425528561" Nov 1 00:17:16.506843 kubelet[2591]: I1101 00:17:16.506669 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.506664727 podStartE2EDuration="1.506664727s" podCreationTimestamp="2025-11-01 00:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:17:16.495203231 +0000 UTC m=+1.414199965" watchObservedRunningTime="2025-11-01 00:17:16.506664727 +0000 UTC m=+1.425661451" Nov 1 00:17:17.301228 kubelet[2591]: E1101 00:17:17.301158 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:17.301832 kubelet[2591]: E1101 00:17:17.301535 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:18.303536 kubelet[2591]: E1101 00:17:18.303464 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:19.309047 kubelet[2591]: E1101 00:17:19.308962 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:19.410450 kubelet[2591]: I1101 00:17:19.410383 2591 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:17:19.411058 containerd[1465]: time="2025-11-01T00:17:19.411005422Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:17:19.413376 kubelet[2591]: I1101 00:17:19.413271 2591 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:17:19.990121 systemd[1]: Created slice kubepods-besteffort-pod231e6513_173c_4bfd_a595_2c2917ddd676.slice - libcontainer container kubepods-besteffort-pod231e6513_173c_4bfd_a595_2c2917ddd676.slice. 
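
The dns.go:153 "Nameserver limits exceeded" warnings that repeat through this log mean the node's /etc/resolv.conf lists more nameservers than the resolver supports, so the kubelet keeps only the three shown (1.1.1.1 1.0.0.1 8.8.8.8); three is the classic glibc MAXNS limit. A sketch that reproduces the check on the node; it only reads resolv.conf and prints what would be kept:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNS = 3 // resolver limit the kubelet enforces per the log
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNS {
            fmt.Printf("nameserver limits exceeded: %d listed, keeping %v\n",
                len(servers), servers[:maxNS])
        } else {
            fmt.Println("nameservers within limit:", servers)
        }
    }
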
Nov 1 00:17:20.008880 systemd[1]: Created slice kubepods-besteffort-pod45b43757_3fd5_4069_93d1_565b0a2ba56d.slice - libcontainer container kubepods-besteffort-pod45b43757_3fd5_4069_93d1_565b0a2ba56d.slice. Nov 1 00:17:20.028780 kubelet[2591]: I1101 00:17:20.028635 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptx8d\" (UniqueName: \"kubernetes.io/projected/45b43757-3fd5-4069-93d1-565b0a2ba56d-kube-api-access-ptx8d\") pod \"kube-proxy-65sg5\" (UID: \"45b43757-3fd5-4069-93d1-565b0a2ba56d\") " pod="kube-system/kube-proxy-65sg5" Nov 1 00:17:20.028780 kubelet[2591]: I1101 00:17:20.028743 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45b43757-3fd5-4069-93d1-565b0a2ba56d-lib-modules\") pod \"kube-proxy-65sg5\" (UID: \"45b43757-3fd5-4069-93d1-565b0a2ba56d\") " pod="kube-system/kube-proxy-65sg5" Nov 1 00:17:20.028780 kubelet[2591]: I1101 00:17:20.028770 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/231e6513-173c-4bfd-a595-2c2917ddd676-var-lib-calico\") pod \"tigera-operator-7dcd859c48-64nzl\" (UID: \"231e6513-173c-4bfd-a595-2c2917ddd676\") " pod="tigera-operator/tigera-operator-7dcd859c48-64nzl" Nov 1 00:17:20.028780 kubelet[2591]: I1101 00:17:20.028787 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7pm2\" (UniqueName: \"kubernetes.io/projected/231e6513-173c-4bfd-a595-2c2917ddd676-kube-api-access-v7pm2\") pod \"tigera-operator-7dcd859c48-64nzl\" (UID: \"231e6513-173c-4bfd-a595-2c2917ddd676\") " pod="tigera-operator/tigera-operator-7dcd859c48-64nzl" Nov 1 00:17:20.029076 kubelet[2591]: I1101 00:17:20.028912 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45b43757-3fd5-4069-93d1-565b0a2ba56d-xtables-lock\") pod \"kube-proxy-65sg5\" (UID: \"45b43757-3fd5-4069-93d1-565b0a2ba56d\") " pod="kube-system/kube-proxy-65sg5" Nov 1 00:17:20.029076 kubelet[2591]: I1101 00:17:20.028934 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45b43757-3fd5-4069-93d1-565b0a2ba56d-kube-proxy\") pod \"kube-proxy-65sg5\" (UID: \"45b43757-3fd5-4069-93d1-565b0a2ba56d\") " pod="kube-system/kube-proxy-65sg5" Nov 1 00:17:20.044584 kubelet[2591]: E1101 00:17:20.044491 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:20.305146 containerd[1465]: time="2025-11-01T00:17:20.304972743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-64nzl,Uid:231e6513-173c-4bfd-a595-2c2917ddd676,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:17:20.311507 kubelet[2591]: E1101 00:17:20.311437 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:20.313134 kubelet[2591]: E1101 00:17:20.313070 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 
00:17:20.313650 containerd[1465]: time="2025-11-01T00:17:20.313586766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-65sg5,Uid:45b43757-3fd5-4069-93d1-565b0a2ba56d,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:20.448488 containerd[1465]: time="2025-11-01T00:17:20.447996312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:20.448488 containerd[1465]: time="2025-11-01T00:17:20.448118360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:20.448488 containerd[1465]: time="2025-11-01T00:17:20.448137648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:20.448488 containerd[1465]: time="2025-11-01T00:17:20.448276609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:20.464533 containerd[1465]: time="2025-11-01T00:17:20.462631609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:20.464533 containerd[1465]: time="2025-11-01T00:17:20.462729269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:20.464533 containerd[1465]: time="2025-11-01T00:17:20.462760980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:20.464533 containerd[1465]: time="2025-11-01T00:17:20.462914369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:20.514201 systemd[1]: Started cri-containerd-567c8f981b163de29a3efcc4888e980dc75ff3777d4f2b70fe6c195c32724002.scope - libcontainer container 567c8f981b163de29a3efcc4888e980dc75ff3777d4f2b70fe6c195c32724002. Nov 1 00:17:20.523255 systemd[1]: Started cri-containerd-57538e1589e3560418638224945ca0fec0d3fc73f557a958529f61de4e74a057.scope - libcontainer container 57538e1589e3560418638224945ca0fec0d3fc73f557a958529f61de4e74a057. 
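The RunPodSandbox entries above are CRI calls from the kubelet into containerd. A sketch of the same call issued directly over the CRI gRPC API; it assumes the k8s.io/cri-api and google.golang.org/grpc modules and containerd's common default socket path, which may differ on a given host:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint over its unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// The metadata fields line up with the RunPodSandbox log entry above.
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-65sg5",
				Uid:       "45b43757-3fd5-4069-93d1-565b0a2ba56d",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// The returned id is the 64-hex sandbox id that the "returns sandbox
	// id" entry and the cri-containerd-<id>.scope unit both carry.
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```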
Nov 1 00:17:20.578228 containerd[1465]: time="2025-11-01T00:17:20.577664588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-65sg5,Uid:45b43757-3fd5-4069-93d1-565b0a2ba56d,Namespace:kube-system,Attempt:0,} returns sandbox id \"57538e1589e3560418638224945ca0fec0d3fc73f557a958529f61de4e74a057\"" Nov 1 00:17:20.582626 kubelet[2591]: E1101 00:17:20.581634 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:20.600032 containerd[1465]: time="2025-11-01T00:17:20.599979897Z" level=info msg="CreateContainer within sandbox \"57538e1589e3560418638224945ca0fec0d3fc73f557a958529f61de4e74a057\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:17:20.620920 containerd[1465]: time="2025-11-01T00:17:20.620746913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-64nzl,Uid:231e6513-173c-4bfd-a595-2c2917ddd676,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"567c8f981b163de29a3efcc4888e980dc75ff3777d4f2b70fe6c195c32724002\"" Nov 1 00:17:20.623819 containerd[1465]: time="2025-11-01T00:17:20.623769408Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:17:20.643180 containerd[1465]: time="2025-11-01T00:17:20.642937971Z" level=info msg="CreateContainer within sandbox \"57538e1589e3560418638224945ca0fec0d3fc73f557a958529f61de4e74a057\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef5f70d723c737bba59d814465ac1b5915c186188bf387ca4b936e753fd4631b\"" Nov 1 00:17:20.644020 containerd[1465]: time="2025-11-01T00:17:20.643980861Z" level=info msg="StartContainer for \"ef5f70d723c737bba59d814465ac1b5915c186188bf387ca4b936e753fd4631b\"" Nov 1 00:17:20.720173 systemd[1]: Started cri-containerd-ef5f70d723c737bba59d814465ac1b5915c186188bf387ca4b936e753fd4631b.scope - libcontainer container ef5f70d723c737bba59d814465ac1b5915c186188bf387ca4b936e753fd4631b. Nov 1 00:17:20.798552 containerd[1465]: time="2025-11-01T00:17:20.797256802Z" level=info msg="StartContainer for \"ef5f70d723c737bba59d814465ac1b5915c186188bf387ca4b936e753fd4631b\" returns successfully" Nov 1 00:17:21.319985 kubelet[2591]: E1101 00:17:21.318763 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:21.334080 kubelet[2591]: I1101 00:17:21.333982 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-65sg5" podStartSLOduration=2.333955201 podStartE2EDuration="2.333955201s" podCreationTimestamp="2025-11-01 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:17:21.333330125 +0000 UTC m=+6.252326839" watchObservedRunningTime="2025-11-01 00:17:21.333955201 +0000 UTC m=+6.252951925" Nov 1 00:17:22.405386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276911814.mount: Deactivated successfully. 
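The tmpmount unit name just above, containerd\x2dmount2276911814.mount, reflects systemd's unit-name escaping: path separators become dashes, and a literal dash inside a path component is escaped as \x2d. A simplified sketch of that mangling (not the full systemd-escape algorithm, which also escapes other characters):

```go
package main

import (
	"fmt"
	"strings"
)

// systemdEscape sketches the path-to-unit-name mapping: '/' separators
// turn into '-', and literal '-' inside a component becomes \x2d so the
// original path stays recoverable.
func systemdEscape(path string) string {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, "-", `\x2d`)
	}
	return strings.Join(parts, "-")
}

func main() {
	fmt.Println(systemdEscape("/var/lib/containerd/tmpmounts/containerd-mount2276911814") + ".mount")
	// var-lib-containerd-tmpmounts-containerd\x2dmount2276911814.mount
}
```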
Nov 1 00:17:23.543711 containerd[1465]: time="2025-11-01T00:17:23.543550717Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:23.545030 containerd[1465]: time="2025-11-01T00:17:23.544914956Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:17:23.546557 containerd[1465]: time="2025-11-01T00:17:23.546503211Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:23.551520 containerd[1465]: time="2025-11-01T00:17:23.551427173Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:23.552438 containerd[1465]: time="2025-11-01T00:17:23.552376476Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.928540589s" Nov 1 00:17:23.552438 containerd[1465]: time="2025-11-01T00:17:23.552429971Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:17:23.558845 containerd[1465]: time="2025-11-01T00:17:23.558782536Z" level=info msg="CreateContainer within sandbox \"567c8f981b163de29a3efcc4888e980dc75ff3777d4f2b70fe6c195c32724002\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:17:23.576562 containerd[1465]: time="2025-11-01T00:17:23.576494372Z" level=info msg="CreateContainer within sandbox \"567c8f981b163de29a3efcc4888e980dc75ff3777d4f2b70fe6c195c32724002\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7e92c4b7f9292b48bc6dca8d1fd7d61b705b164cff75b632e21d4d86be7fe9db\"" Nov 1 00:17:23.577295 containerd[1465]: time="2025-11-01T00:17:23.577266130Z" level=info msg="StartContainer for \"7e92c4b7f9292b48bc6dca8d1fd7d61b705b164cff75b632e21d4d86be7fe9db\"" Nov 1 00:17:23.614177 systemd[1]: run-containerd-runc-k8s.io-7e92c4b7f9292b48bc6dca8d1fd7d61b705b164cff75b632e21d4d86be7fe9db-runc.QBesFV.mount: Deactivated successfully. Nov 1 00:17:23.628192 systemd[1]: Started cri-containerd-7e92c4b7f9292b48bc6dca8d1fd7d61b705b164cff75b632e21d4d86be7fe9db.scope - libcontainer container 7e92c4b7f9292b48bc6dca8d1fd7d61b705b164cff75b632e21d4d86be7fe9db. 
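The "in 2.928540589s" figure above is containerd's internally measured pull time for the tigera/operator image; differencing the PullImage request and the "Pulled image" completion timestamps from the log gives nearly the same value. A small check:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the PullImage and Pulled entries above;
	// errors ignored for brevity since the inputs are fixed literals.
	start, _ := time.Parse(time.RFC3339Nano, "2025-11-01T00:17:20.623769408Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-11-01T00:17:23.552376476Z")
	// Prints 2.928607068s: close to the reported 2.928540589s, which
	// containerd measures internally, so the two differ slightly.
	fmt.Println(done.Sub(start))
}
```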
Nov 1 00:17:23.664351 containerd[1465]: time="2025-11-01T00:17:23.664255589Z" level=info msg="StartContainer for \"7e92c4b7f9292b48bc6dca8d1fd7d61b705b164cff75b632e21d4d86be7fe9db\" returns successfully" Nov 1 00:17:24.343278 kubelet[2591]: I1101 00:17:24.343161 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-64nzl" podStartSLOduration=2.412836328 podStartE2EDuration="5.343136647s" podCreationTimestamp="2025-11-01 00:17:19 +0000 UTC" firstStartedPulling="2025-11-01 00:17:20.623317377 +0000 UTC m=+5.542314102" lastFinishedPulling="2025-11-01 00:17:23.553617697 +0000 UTC m=+8.472614421" observedRunningTime="2025-11-01 00:17:24.342899628 +0000 UTC m=+9.261896362" watchObservedRunningTime="2025-11-01 00:17:24.343136647 +0000 UTC m=+9.262133381" Nov 1 00:17:24.540399 kubelet[2591]: E1101 00:17:24.540138 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:25.331992 kubelet[2591]: E1101 00:17:25.331950 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:29.799695 sudo[1658]: pam_unix(sudo:session): session closed for user root Nov 1 00:17:29.803027 sshd[1655]: pam_unix(sshd:session): session closed for user core Nov 1 00:17:29.808747 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:17:29.810596 systemd[1]: sshd@8-10.0.0.38:22-10.0.0.1:34570.service: Deactivated successfully. Nov 1 00:17:29.814416 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:17:29.814737 systemd[1]: session-9.scope: Consumed 8.290s CPU time, 164.9M memory peak, 0B memory swap peak. Nov 1 00:17:29.816270 systemd-logind[1452]: Removed session 9. Nov 1 00:17:34.486654 systemd[1]: Created slice kubepods-besteffort-podd55db833_c245_4f24_b907_448d6739acc5.slice - libcontainer container kubepods-besteffort-podd55db833_c245_4f24_b907_448d6739acc5.slice. 
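The pod_startup_latency_tracker entries report two durations: podStartE2EDuration is the observed-running time minus the pod creation timestamp, and podStartSLOduration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). The m=+… suffixes are Go's monotonic clock readings as printed by time.Time. A reconstruction of the tigera-operator numbers from the timestamps logged above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// All four timestamps come from the tigera-operator entry above;
	// errors ignored for brevity since the inputs are fixed literals.
	creation, _ := time.Parse(time.RFC3339Nano, "2025-11-01T00:17:19Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-11-01T00:17:24.343136647Z")
	pullStart, _ := time.Parse(time.RFC3339Nano, "2025-11-01T00:17:20.623317377Z")
	pullEnd, _ := time.Parse(time.RFC3339Nano, "2025-11-01T00:17:23.553617697Z")

	e2e := observed.Sub(creation)       // 5.343136647s, the logged E2E duration
	slo := e2e - pullEnd.Sub(pullStart) // ≈2.412836327s, the logged SLO duration
	fmt.Println(e2e, slo)               // matches the log to within a nanosecond
}
```

For the kube-proxy and control-plane pods above, the pull timestamps are the zero value (0001-01-01), so the two durations coincide.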
Nov 1 00:17:34.636370 kubelet[2591]: I1101 00:17:34.636301 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d55db833-c245-4f24-b907-448d6739acc5-tigera-ca-bundle\") pod \"calico-typha-8494d8f865-7qppr\" (UID: \"d55db833-c245-4f24-b907-448d6739acc5\") " pod="calico-system/calico-typha-8494d8f865-7qppr" Nov 1 00:17:34.636370 kubelet[2591]: I1101 00:17:34.636380 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d55db833-c245-4f24-b907-448d6739acc5-typha-certs\") pod \"calico-typha-8494d8f865-7qppr\" (UID: \"d55db833-c245-4f24-b907-448d6739acc5\") " pod="calico-system/calico-typha-8494d8f865-7qppr" Nov 1 00:17:34.636990 kubelet[2591]: I1101 00:17:34.636418 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdwr9\" (UniqueName: \"kubernetes.io/projected/d55db833-c245-4f24-b907-448d6739acc5-kube-api-access-jdwr9\") pod \"calico-typha-8494d8f865-7qppr\" (UID: \"d55db833-c245-4f24-b907-448d6739acc5\") " pod="calico-system/calico-typha-8494d8f865-7qppr" Nov 1 00:17:34.672452 systemd[1]: Created slice kubepods-besteffort-podba165251_79ae_4422_b21b_5d584ee72bde.slice - libcontainer container kubepods-besteffort-podba165251_79ae_4422_b21b_5d584ee72bde.slice. Nov 1 00:17:34.839115 kubelet[2591]: I1101 00:17:34.838933 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ba165251-79ae-4422-b21b-5d584ee72bde-policysync\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839115 kubelet[2591]: I1101 00:17:34.839019 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba165251-79ae-4422-b21b-5d584ee72bde-tigera-ca-bundle\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839115 kubelet[2591]: I1101 00:17:34.839047 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ba165251-79ae-4422-b21b-5d584ee72bde-cni-log-dir\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839115 kubelet[2591]: I1101 00:17:34.839070 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfwfm\" (UniqueName: \"kubernetes.io/projected/ba165251-79ae-4422-b21b-5d584ee72bde-kube-api-access-zfwfm\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839115 kubelet[2591]: I1101 00:17:34.839104 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ba165251-79ae-4422-b21b-5d584ee72bde-cni-bin-dir\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839420 kubelet[2591]: I1101 00:17:34.839128 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/ba165251-79ae-4422-b21b-5d584ee72bde-lib-modules\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839420 kubelet[2591]: I1101 00:17:34.839152 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ba165251-79ae-4422-b21b-5d584ee72bde-node-certs\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839420 kubelet[2591]: I1101 00:17:34.839176 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ba165251-79ae-4422-b21b-5d584ee72bde-cni-net-dir\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839420 kubelet[2591]: I1101 00:17:34.839196 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ba165251-79ae-4422-b21b-5d584ee72bde-var-lib-calico\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839420 kubelet[2591]: I1101 00:17:34.839218 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba165251-79ae-4422-b21b-5d584ee72bde-xtables-lock\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839565 kubelet[2591]: I1101 00:17:34.839247 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ba165251-79ae-4422-b21b-5d584ee72bde-flexvol-driver-host\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.839565 kubelet[2591]: I1101 00:17:34.839268 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ba165251-79ae-4422-b21b-5d584ee72bde-var-run-calico\") pod \"calico-node-8hlvg\" (UID: \"ba165251-79ae-4422-b21b-5d584ee72bde\") " pod="calico-system/calico-node-8hlvg" Nov 1 00:17:34.994885 kubelet[2591]: E1101 00:17:34.993951 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:34.994885 kubelet[2591]: W1101 00:17:34.993979 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:34.994885 kubelet[2591]: E1101 00:17:34.994023 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:35.030327 kubelet[2591]: E1101 00:17:35.030234 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:17:35.044762 kubelet[2591]: E1101 00:17:35.044717 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.044762 kubelet[2591]: W1101 00:17:35.044740 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.044762 kubelet[2591]: E1101 00:17:35.044763 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.045049 kubelet[2591]: E1101 00:17:35.045018 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.045049 kubelet[2591]: W1101 00:17:35.045029 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.045049 kubelet[2591]: E1101 00:17:35.045040 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.045470 kubelet[2591]: E1101 00:17:35.045375 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.045470 kubelet[2591]: W1101 00:17:35.045387 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.045470 kubelet[2591]: E1101 00:17:35.045409 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.045704 kubelet[2591]: E1101 00:17:35.045685 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.045766 kubelet[2591]: W1101 00:17:35.045698 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.045766 kubelet[2591]: E1101 00:17:35.045722 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:35.046009 kubelet[2591]: E1101 00:17:35.045979 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.046009 kubelet[2591]: W1101 00:17:35.045990 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.046009 kubelet[2591]: E1101 00:17:35.046012 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.046243 kubelet[2591]: E1101 00:17:35.046214 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.046243 kubelet[2591]: W1101 00:17:35.046224 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.046243 kubelet[2591]: E1101 00:17:35.046233 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.046572 kubelet[2591]: E1101 00:17:35.046435 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.046572 kubelet[2591]: W1101 00:17:35.046452 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.046572 kubelet[2591]: E1101 00:17:35.046465 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.046791 kubelet[2591]: E1101 00:17:35.046756 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.046791 kubelet[2591]: W1101 00:17:35.046774 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.046791 kubelet[2591]: E1101 00:17:35.046787 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.047126 kubelet[2591]: E1101 00:17:35.047111 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.047126 kubelet[2591]: W1101 00:17:35.047124 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.047221 kubelet[2591]: E1101 00:17:35.047137 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:35.047414 kubelet[2591]: E1101 00:17:35.047395 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.047414 kubelet[2591]: W1101 00:17:35.047407 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.047502 kubelet[2591]: E1101 00:17:35.047418 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.047681 kubelet[2591]: E1101 00:17:35.047663 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.047681 kubelet[2591]: W1101 00:17:35.047675 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.047774 kubelet[2591]: E1101 00:17:35.047686 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.047989 kubelet[2591]: E1101 00:17:35.047970 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.047989 kubelet[2591]: W1101 00:17:35.047982 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.048086 kubelet[2591]: E1101 00:17:35.048008 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.048284 kubelet[2591]: E1101 00:17:35.048265 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.048284 kubelet[2591]: W1101 00:17:35.048277 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.048370 kubelet[2591]: E1101 00:17:35.048289 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.048536 kubelet[2591]: E1101 00:17:35.048518 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.048536 kubelet[2591]: W1101 00:17:35.048530 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.048611 kubelet[2591]: E1101 00:17:35.048541 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:35.048807 kubelet[2591]: E1101 00:17:35.048789 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.048807 kubelet[2591]: W1101 00:17:35.048800 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.048951 kubelet[2591]: E1101 00:17:35.048811 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.049072 kubelet[2591]: E1101 00:17:35.049053 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.049072 kubelet[2591]: W1101 00:17:35.049065 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.049159 kubelet[2591]: E1101 00:17:35.049077 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.049349 kubelet[2591]: E1101 00:17:35.049331 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.049349 kubelet[2591]: W1101 00:17:35.049343 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.049449 kubelet[2591]: E1101 00:17:35.049354 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.049582 kubelet[2591]: E1101 00:17:35.049563 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.049582 kubelet[2591]: W1101 00:17:35.049575 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.049648 kubelet[2591]: E1101 00:17:35.049586 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.049875 kubelet[2591]: E1101 00:17:35.049842 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.049943 kubelet[2591]: W1101 00:17:35.049885 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.049943 kubelet[2591]: E1101 00:17:35.049901 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:35.050143 kubelet[2591]: E1101 00:17:35.050125 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.050143 kubelet[2591]: W1101 00:17:35.050136 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.050217 kubelet[2591]: E1101 00:17:35.050148 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.090330 kubelet[2591]: E1101 00:17:35.090166 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:35.091795 containerd[1465]: time="2025-11-01T00:17:35.091503564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8494d8f865-7qppr,Uid:d55db833-c245-4f24-b907-448d6739acc5,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:35.125423 containerd[1465]: time="2025-11-01T00:17:35.125225321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:35.125633 containerd[1465]: time="2025-11-01T00:17:35.125453442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:35.125633 containerd[1465]: time="2025-11-01T00:17:35.125500933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:35.125731 containerd[1465]: time="2025-11-01T00:17:35.125637556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:35.142613 kubelet[2591]: E1101 00:17:35.142555 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.142613 kubelet[2591]: W1101 00:17:35.142602 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.142912 kubelet[2591]: E1101 00:17:35.142640 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
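The driver-call failure triple above repeats dozens of times in the original capture, once per plugin-probe pass, with only the timestamps changing; the duplicates are collapsed here. They come from the kubelet's FlexVolume prober: it execs each driver binary under the plugin directory with an init argument and JSON-decodes stdout, so a missing binary fails at exec time and an empty stdout fails with json's "unexpected end of JSON input". A sketch of that calling convention; the driverStatus shape follows the FlexVolume spec, and the exact error text varies by failure mode:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is the minimal JSON shape a FlexVolume driver must print
// on stdout (field names per the FlexVolume spec).
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callDriver sketches the calling convention implied by the log: exec
// the driver, then JSON-decode its stdout. A missing binary fails in
// exec.Command's Output; empty stdout fails in json.Unmarshal.
func callDriver(path string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(path, args...).Output()
	if err != nil {
		return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, err)
	}
	return &st, nil
}

func main() {
	// The path matches the plugin directory named in the log entries.
	if _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init"); err != nil {
		fmt.Println(err) // exec failure when the driver binary is absent
	}
	var st driverStatus
	fmt.Println(json.Unmarshal([]byte(""), &st)) // unexpected end of JSON input
}
```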
Nov 1 00:17:35.142912 kubelet[2591]: I1101 00:17:35.142699 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cddeab39-52b2-4e4d-8121-8c667fc57977-socket-dir\") pod \"csi-node-driver-v865s\" (UID: \"cddeab39-52b2-4e4d-8121-8c667fc57977\") " pod="calico-system/csi-node-driver-v865s"
Nov 1 00:17:35.143225 kubelet[2591]: I1101 00:17:35.143176 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cddeab39-52b2-4e4d-8121-8c667fc57977-registration-dir\") pod \"csi-node-driver-v865s\" (UID: \"cddeab39-52b2-4e4d-8121-8c667fc57977\") " pod="calico-system/csi-node-driver-v865s"
Nov 1 00:17:35.148885 kubelet[2591]: I1101 00:17:35.146496 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cddeab39-52b2-4e4d-8121-8c667fc57977-kubelet-dir\") pod \"csi-node-driver-v865s\" (UID: \"cddeab39-52b2-4e4d-8121-8c667fc57977\") " pod="calico-system/csi-node-driver-v865s"
Nov 1 00:17:35.146157 systemd[1]: Started cri-containerd-96e1d7548d0c88498585439b888f2828fd0b2a9fb63484fbb0f1e05bf06a289d.scope - libcontainer container 96e1d7548d0c88498585439b888f2828fd0b2a9fb63484fbb0f1e05bf06a289d.
Nov 1 00:17:35.149336 kubelet[2591]: I1101 00:17:35.148532 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cddeab39-52b2-4e4d-8121-8c667fc57977-varrun\") pod \"csi-node-driver-v865s\" (UID: \"cddeab39-52b2-4e4d-8121-8c667fc57977\") " pod="calico-system/csi-node-driver-v865s"
Nov 1 00:17:35.149627 kubelet[2591]: I1101 00:17:35.149248 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5vh2\" (UniqueName: \"kubernetes.io/projected/cddeab39-52b2-4e4d-8121-8c667fc57977-kube-api-access-d5vh2\") pod \"csi-node-driver-v865s\" (UID: \"cddeab39-52b2-4e4d-8121-8c667fc57977\") " pod="calico-system/csi-node-driver-v865s"
Nov 1 00:17:35.194946 containerd[1465]: time="2025-11-01T00:17:35.194889129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8494d8f865-7qppr,Uid:d55db833-c245-4f24-b907-448d6739acc5,Namespace:calico-system,Attempt:0,} returns sandbox id \"96e1d7548d0c88498585439b888f2828fd0b2a9fb63484fbb0f1e05bf06a289d\""
Nov 1 00:17:35.195749 kubelet[2591]: E1101 00:17:35.195707 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:17:35.197266 containerd[1465]: time="2025-11-01T00:17:35.197194265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
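The dns.go:153 errors that recur throughout this log, including just above, stem from the glibc resolv.conf limit of three nameservers: the kubelet applies the first three and reports the rest as omitted. A sketch of that truncation, with the limit constant named illustratively (kubelet's own code differs):

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic resolv.conf limit (glibc's MAXNS
// is 3); the constant name here is illustrative.
const maxNameservers = 3

// clampNameservers keeps the first maxNameservers entries and reports
// whether anything was dropped, which is the condition behind the
// "Nameserver limits exceeded" entries in this log.
func clampNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// Four configured servers; the first three match the applied line
	// in the log, and the fourth here is a hypothetical extra entry.
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	if applied, truncated := clampNameservers(configured); truncated {
		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```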
Error: unexpected end of JSON input" Nov 1 00:17:35.251668 kubelet[2591]: E1101 00:17:35.251651 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.251668 kubelet[2591]: W1101 00:17:35.251664 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.251722 kubelet[2591]: E1101 00:17:35.251676 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.252015 kubelet[2591]: E1101 00:17:35.251988 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.252015 kubelet[2591]: W1101 00:17:35.252011 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.252068 kubelet[2591]: E1101 00:17:35.252024 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.252316 kubelet[2591]: E1101 00:17:35.252291 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.252316 kubelet[2591]: W1101 00:17:35.252305 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.252316 kubelet[2591]: E1101 00:17:35.252318 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.252642 kubelet[2591]: E1101 00:17:35.252622 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.252642 kubelet[2591]: W1101 00:17:35.252633 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.252642 kubelet[2591]: E1101 00:17:35.252643 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.252931 kubelet[2591]: E1101 00:17:35.252912 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.252931 kubelet[2591]: W1101 00:17:35.252925 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.253033 kubelet[2591]: E1101 00:17:35.252936 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:35.253227 kubelet[2591]: E1101 00:17:35.253209 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.253227 kubelet[2591]: W1101 00:17:35.253221 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.253303 kubelet[2591]: E1101 00:17:35.253231 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.253704 kubelet[2591]: E1101 00:17:35.253654 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.253704 kubelet[2591]: W1101 00:17:35.253693 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.253790 kubelet[2591]: E1101 00:17:35.253730 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.254188 kubelet[2591]: E1101 00:17:35.254167 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.254188 kubelet[2591]: W1101 00:17:35.254184 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.254274 kubelet[2591]: E1101 00:17:35.254198 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.254455 kubelet[2591]: E1101 00:17:35.254437 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.254455 kubelet[2591]: W1101 00:17:35.254450 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.254533 kubelet[2591]: E1101 00:17:35.254462 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.254748 kubelet[2591]: E1101 00:17:35.254732 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.254748 kubelet[2591]: W1101 00:17:35.254746 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.254804 kubelet[2591]: E1101 00:17:35.254757 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:35.255069 kubelet[2591]: E1101 00:17:35.255051 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.255069 kubelet[2591]: W1101 00:17:35.255066 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.256796 kubelet[2591]: E1101 00:17:35.255183 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.256796 kubelet[2591]: E1101 00:17:35.255849 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.256796 kubelet[2591]: W1101 00:17:35.255889 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.256796 kubelet[2591]: E1101 00:17:35.255914 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.259470 kubelet[2591]: E1101 00:17:35.259423 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.259470 kubelet[2591]: W1101 00:17:35.259458 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.259599 kubelet[2591]: E1101 00:17:35.259487 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.259949 kubelet[2591]: E1101 00:17:35.259913 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.259949 kubelet[2591]: W1101 00:17:35.259925 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.259949 kubelet[2591]: E1101 00:17:35.259935 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.260213 kubelet[2591]: E1101 00:17:35.260180 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.260213 kubelet[2591]: W1101 00:17:35.260196 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.260213 kubelet[2591]: E1101 00:17:35.260205 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:35.260478 kubelet[2591]: E1101 00:17:35.260458 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.260478 kubelet[2591]: W1101 00:17:35.260472 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.260545 kubelet[2591]: E1101 00:17:35.260482 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.260727 kubelet[2591]: E1101 00:17:35.260710 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.260727 kubelet[2591]: W1101 00:17:35.260721 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.260806 kubelet[2591]: E1101 00:17:35.260729 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.261091 kubelet[2591]: E1101 00:17:35.261061 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.261091 kubelet[2591]: W1101 00:17:35.261076 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.261091 kubelet[2591]: E1101 00:17:35.261086 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.261379 kubelet[2591]: E1101 00:17:35.261350 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.261379 kubelet[2591]: W1101 00:17:35.261366 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.261379 kubelet[2591]: E1101 00:17:35.261377 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.263293 kubelet[2591]: E1101 00:17:35.263259 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.263293 kubelet[2591]: W1101 00:17:35.263274 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.263293 kubelet[2591]: E1101 00:17:35.263285 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:35.263570 kubelet[2591]: E1101 00:17:35.263544 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.263570 kubelet[2591]: W1101 00:17:35.263557 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.263570 kubelet[2591]: E1101 00:17:35.263566 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.263925 kubelet[2591]: E1101 00:17:35.263887 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.263925 kubelet[2591]: W1101 00:17:35.263911 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.263925 kubelet[2591]: E1101 00:17:35.263935 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.264329 kubelet[2591]: E1101 00:17:35.264299 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.264366 kubelet[2591]: W1101 00:17:35.264328 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.264366 kubelet[2591]: E1101 00:17:35.264356 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.264793 kubelet[2591]: E1101 00:17:35.264775 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.264793 kubelet[2591]: W1101 00:17:35.264790 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.264927 kubelet[2591]: E1101 00:17:35.264804 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:35.275045 kubelet[2591]: E1101 00:17:35.274971 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:35.275045 kubelet[2591]: W1101 00:17:35.274998 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:35.275045 kubelet[2591]: E1101 00:17:35.275045 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:17:35.275985 kubelet[2591]: E1101 00:17:35.275947 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:35.276628 containerd[1465]: time="2025-11-01T00:17:35.276560649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8hlvg,Uid:ba165251-79ae-4422-b21b-5d584ee72bde,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:35.320025 containerd[1465]: time="2025-11-01T00:17:35.319206390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:17:35.320025 containerd[1465]: time="2025-11-01T00:17:35.319934364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:17:35.320025 containerd[1465]: time="2025-11-01T00:17:35.319952119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:35.320287 containerd[1465]: time="2025-11-01T00:17:35.320100495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:17:35.341046 systemd[1]: Started cri-containerd-6d2a430caa29e09edd2db2f90f192a6d33ffbb995c3d38f75f8e7acbb0c27278.scope - libcontainer container 6d2a430caa29e09edd2db2f90f192a6d33ffbb995c3d38f75f8e7acbb0c27278. Nov 1 00:17:35.371466 containerd[1465]: time="2025-11-01T00:17:35.371404457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8hlvg,Uid:ba165251-79ae-4422-b21b-5d584ee72bde,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d2a430caa29e09edd2db2f90f192a6d33ffbb995c3d38f75f8e7acbb0c27278\"" Nov 1 00:17:35.372448 kubelet[2591]: E1101 00:17:35.372396 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:37.283437 kubelet[2591]: E1101 00:17:37.283299 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:17:37.960830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664419879.mount: Deactivated successfully. 
Nov 1 00:17:38.677058 containerd[1465]: time="2025-11-01T00:17:38.677005991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:38.679647 containerd[1465]: time="2025-11-01T00:17:38.679586964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:17:38.681603 containerd[1465]: time="2025-11-01T00:17:38.681544636Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:38.688004 containerd[1465]: time="2025-11-01T00:17:38.687943567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:38.688684 containerd[1465]: time="2025-11-01T00:17:38.688641251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.491411358s" Nov 1 00:17:38.688684 containerd[1465]: time="2025-11-01T00:17:38.688677191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:17:38.689835 containerd[1465]: time="2025-11-01T00:17:38.689804163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:17:38.728145 containerd[1465]: time="2025-11-01T00:17:38.728082335Z" level=info msg="CreateContainer within sandbox \"96e1d7548d0c88498585439b888f2828fd0b2a9fb63484fbb0f1e05bf06a289d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:17:38.755170 containerd[1465]: time="2025-11-01T00:17:38.755089818Z" level=info msg="CreateContainer within sandbox \"96e1d7548d0c88498585439b888f2828fd0b2a9fb63484fbb0f1e05bf06a289d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"04c3754500c6e9e3d2502a38e4b64c4dac98025920ecb2b248911f7b7c84d13c\"" Nov 1 00:17:38.755783 containerd[1465]: time="2025-11-01T00:17:38.755737215Z" level=info msg="StartContainer for \"04c3754500c6e9e3d2502a38e4b64c4dac98025920ecb2b248911f7b7c84d13c\"" Nov 1 00:17:38.791014 systemd[1]: Started cri-containerd-04c3754500c6e9e3d2502a38e4b64c4dac98025920ecb2b248911f7b7c84d13c.scope - libcontainer container 04c3754500c6e9e3d2502a38e4b64c4dac98025920ecb2b248911f7b7c84d13c. 
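The sequence above, pull the image, CreateContainer inside an existing sandbox, StartContainer, and a transient cri-containerd-<id>.scope started by systemd, is the normal CRI container start path. Roughly the same flow can be driven against containerd directly with its 1.x Go client, as in this sketch (the container and snapshot names are invented; it needs a reachable containerd socket and uses the k8s.io namespace seen in the log):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Equivalent of the PullImage/ImageCreate lines above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of "CreateContainer within sandbox ...".
	c, err := client.NewContainer(ctx, "typha-demo",
		containerd.WithNewSnapshot("typha-demo-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img)))
	if err != nil {
		log.Fatal(err)
	}

	task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of "StartContainer ... returns successfully";
	// teardown (task.Kill/Delete, container Delete) omitted for brevity.
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container, pid %d", task.Pid())
}
```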
Nov 1 00:17:38.853771 containerd[1465]: time="2025-11-01T00:17:38.853713385Z" level=info msg="StartContainer for \"04c3754500c6e9e3d2502a38e4b64c4dac98025920ecb2b248911f7b7c84d13c\" returns successfully" Nov 1 00:17:39.291890 kubelet[2591]: E1101 00:17:39.291753 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:17:39.370457 kubelet[2591]: E1101 00:17:39.370408 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:39.378315 kubelet[2591]: E1101 00:17:39.378267 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:39.378315 kubelet[2591]: W1101 00:17:39.378307 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:39.383379 kubelet[2591]: E1101 00:17:39.383321 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:17:39.401669 kubelet[2591]: I1101 00:17:39.401586 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8494d8f865-7qppr" podStartSLOduration=1.908667947 podStartE2EDuration="5.401563827s" podCreationTimestamp="2025-11-01 00:17:34 +0000 UTC" firstStartedPulling="2025-11-01 00:17:35.196719199 +0000 UTC m=+20.115715933" lastFinishedPulling="2025-11-01 00:17:38.689615079 +0000 UTC m=+23.608611813" observedRunningTime="2025-11-01 00:17:39.397883007 +0000 UTC m=+24.316879751" watchObservedRunningTime="2025-11-01 00:17:39.401563827 +0000 UTC m=+24.320560551" Nov 1 00:17:39.485579 kubelet[2591]: E1101 00:17:39.485524 2591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:17:39.485579 kubelet[2591]: W1101 00:17:39.485565 2591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:17:39.485778 kubelet[2591]: E1101 00:17:39.485602 2591 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Nov 1 00:17:39.970850 containerd[1465]: time="2025-11-01T00:17:39.970037234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:39.971839 containerd[1465]: time="2025-11-01T00:17:39.971723242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:17:39.973056 containerd[1465]: time="2025-11-01T00:17:39.973018938Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:39.976095 containerd[1465]: time="2025-11-01T00:17:39.976046891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:39.976834 containerd[1465]: time="2025-11-01T00:17:39.976774973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.286935783s" Nov 1 00:17:39.976834 containerd[1465]: time="2025-11-01T00:17:39.976827194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:17:39.995717 containerd[1465]: time="2025-11-01T00:17:39.995636108Z" level=info msg="CreateContainer within sandbox \"6d2a430caa29e09edd2db2f90f192a6d33ffbb995c3d38f75f8e7acbb0c27278\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:17:40.020167 containerd[1465]: time="2025-11-01T00:17:40.020085979Z" level=info msg="CreateContainer within sandbox \"6d2a430caa29e09edd2db2f90f192a6d33ffbb995c3d38f75f8e7acbb0c27278\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"93312988f525a0ecb168d0a94d388d47263415a5020b1bac64ebc4dd3777ba7c\"" Nov 1 00:17:40.020542 containerd[1465]: time="2025-11-01T00:17:40.020511077Z" level=info msg="StartContainer for \"93312988f525a0ecb168d0a94d388d47263415a5020b1bac64ebc4dd3777ba7c\"" Nov 1 00:17:40.062053 systemd[1]: Started cri-containerd-93312988f525a0ecb168d0a94d388d47263415a5020b1bac64ebc4dd3777ba7c.scope - libcontainer container 93312988f525a0ecb168d0a94d388d47263415a5020b1bac64ebc4dd3777ba7c. Nov 1 00:17:40.134154 systemd[1]: cri-containerd-93312988f525a0ecb168d0a94d388d47263415a5020b1bac64ebc4dd3777ba7c.scope: Deactivated successfully. 
Nov 1 00:17:40.382388 kubelet[2591]: E1101 00:17:40.382259 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:17:40.422016 containerd[1465]: time="2025-11-01T00:17:40.421953947Z" level=info msg="StartContainer for \"93312988f525a0ecb168d0a94d388d47263415a5020b1bac64ebc4dd3777ba7c\" returns successfully" Nov 1 00:17:40.428219 kubelet[2591]: E1101 00:17:40.428074 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:40.454515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93312988f525a0ecb168d0a94d388d47263415a5020b1bac64ebc4dd3777ba7c-rootfs.mount: Deactivated successfully. Nov 1 00:17:40.469906 containerd[1465]: time="2025-11-01T00:17:40.469784273Z" level=info msg="shim disconnected" id=93312988f525a0ecb168d0a94d388d47263415a5020b1bac64ebc4dd3777ba7c namespace=k8s.io Nov 1 00:17:40.469906 containerd[1465]: time="2025-11-01T00:17:40.469893914Z" level=warning msg="cleaning up after shim disconnected" id=93312988f525a0ecb168d0a94d388d47263415a5020b1bac64ebc4dd3777ba7c namespace=k8s.io Nov 1 00:17:40.469906 containerd[1465]: time="2025-11-01T00:17:40.469911348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:17:41.431366 kubelet[2591]: E1101 00:17:41.431334 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:41.431899 kubelet[2591]: E1101 00:17:41.431482 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:41.432354 containerd[1465]: time="2025-11-01T00:17:41.432275850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:17:42.282554 kubelet[2591]: E1101 00:17:42.282494 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:17:44.282596 kubelet[2591]: E1101 00:17:44.282501 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:17:46.049465 containerd[1465]: time="2025-11-01T00:17:46.049342642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:46.056263 containerd[1465]: time="2025-11-01T00:17:46.056155993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:17:46.059808 containerd[1465]: time="2025-11-01T00:17:46.059737444Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:46.064303 containerd[1465]: time="2025-11-01T00:17:46.064197584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:46.065411 containerd[1465]: time="2025-11-01T00:17:46.065233191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.632913918s" Nov 1 00:17:46.065411 containerd[1465]: time="2025-11-01T00:17:46.065263917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:17:46.087970 containerd[1465]: time="2025-11-01T00:17:46.087828772Z" level=info msg="CreateContainer within sandbox \"6d2a430caa29e09edd2db2f90f192a6d33ffbb995c3d38f75f8e7acbb0c27278\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:17:46.281954 kubelet[2591]: E1101 00:17:46.281887 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:17:46.430774 containerd[1465]: time="2025-11-01T00:17:46.430597074Z" level=info msg="CreateContainer within sandbox \"6d2a430caa29e09edd2db2f90f192a6d33ffbb995c3d38f75f8e7acbb0c27278\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"25642c8faa5c98fedffcdc15977a502b6748315f6d8cbac0279d1cced5dd4215\"" Nov 1 00:17:46.432889 containerd[1465]: time="2025-11-01T00:17:46.431613695Z" level=info msg="StartContainer for \"25642c8faa5c98fedffcdc15977a502b6748315f6d8cbac0279d1cced5dd4215\"" Nov 1 00:17:46.472123 systemd[1]: Started cri-containerd-25642c8faa5c98fedffcdc15977a502b6748315f6d8cbac0279d1cced5dd4215.scope - libcontainer container 25642c8faa5c98fedffcdc15977a502b6748315f6d8cbac0279d1cced5dd4215. Nov 1 00:17:46.512689 containerd[1465]: time="2025-11-01T00:17:46.512625904Z" level=info msg="StartContainer for \"25642c8faa5c98fedffcdc15977a502b6748315f6d8cbac0279d1cced5dd4215\" returns successfully" Nov 1 00:17:47.449711 kubelet[2591]: E1101 00:17:47.449664 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:48.156991 systemd[1]: cri-containerd-25642c8faa5c98fedffcdc15977a502b6748315f6d8cbac0279d1cced5dd4215.scope: Deactivated successfully. Nov 1 00:17:48.180508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25642c8faa5c98fedffcdc15977a502b6748315f6d8cbac0279d1cced5dd4215-rootfs.mount: Deactivated successfully. 
Nov 1 00:17:48.185922 containerd[1465]: time="2025-11-01T00:17:48.185840009Z" level=info msg="shim disconnected" id=25642c8faa5c98fedffcdc15977a502b6748315f6d8cbac0279d1cced5dd4215 namespace=k8s.io Nov 1 00:17:48.185922 containerd[1465]: time="2025-11-01T00:17:48.185920037Z" level=warning msg="cleaning up after shim disconnected" id=25642c8faa5c98fedffcdc15977a502b6748315f6d8cbac0279d1cced5dd4215 namespace=k8s.io Nov 1 00:17:48.186462 containerd[1465]: time="2025-11-01T00:17:48.185930306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:17:48.245603 kubelet[2591]: I1101 00:17:48.245555 2591 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:17:48.285459 systemd[1]: Created slice kubepods-burstable-podd94b4da9_d4a7_4f92_8ec6_90e45ff748b8.slice - libcontainer container kubepods-burstable-podd94b4da9_d4a7_4f92_8ec6_90e45ff748b8.slice. Nov 1 00:17:48.293020 systemd[1]: Created slice kubepods-burstable-pod6cfa1e13_d1c3_4a18_ab06_7a4f7444edb4.slice - libcontainer container kubepods-burstable-pod6cfa1e13_d1c3_4a18_ab06_7a4f7444edb4.slice. Nov 1 00:17:48.305051 systemd[1]: Created slice kubepods-besteffort-pod6970f73b_f9db_4e4e_ace1_ad25d9704f47.slice - libcontainer container kubepods-besteffort-pod6970f73b_f9db_4e4e_ace1_ad25d9704f47.slice. Nov 1 00:17:48.312119 systemd[1]: Created slice kubepods-besteffort-podab5a5667_f558_4d28_9b68_0d3dbc43d636.slice - libcontainer container kubepods-besteffort-podab5a5667_f558_4d28_9b68_0d3dbc43d636.slice. Nov 1 00:17:48.317649 systemd[1]: Created slice kubepods-besteffort-podd3a2d948_d842_45c9_8a49_ba664ed2926c.slice - libcontainer container kubepods-besteffort-podd3a2d948_d842_45c9_8a49_ba664ed2926c.slice. Nov 1 00:17:48.323103 systemd[1]: Created slice kubepods-besteffort-podd3f82561_0214_49cb_b635_63c7018b0ce5.slice - libcontainer container kubepods-besteffort-podd3f82561_0214_49cb_b635_63c7018b0ce5.slice. Nov 1 00:17:48.347828 systemd[1]: Created slice kubepods-besteffort-podcddeab39_52b2_4e4d_8121_8c667fc57977.slice - libcontainer container kubepods-besteffort-podcddeab39_52b2_4e4d_8121_8c667fc57977.slice. Nov 1 00:17:48.352073 containerd[1465]: time="2025-11-01T00:17:48.352032623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v865s,Uid:cddeab39-52b2-4e4d-8121-8c667fc57977,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:48.353293 systemd[1]: Created slice kubepods-besteffort-pod80acf411_c481_42bf_9e90_d393893e1d60.slice - libcontainer container kubepods-besteffort-pod80acf411_c481_42bf_9e90_d393893e1d60.slice. 
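The Created slice lines show the systemd cgroup driver at work: each pod gets a transient slice whose name encodes its QoS class and UID, with the UID's dashes escaped to underscores so the result is a valid systemd unit name. A sketch of that naming rule, reproducing the names from the log (guaranteed-QoS pods, which sit directly under kubepods, are ignored here):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the convention visible in the systemd lines
// above: kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice.
func podSliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Prints kubepods-burstable-podd94b4da9_d4a7_4f92_8ec6_90e45ff748b8.slice
	fmt.Println(podSliceName("burstable", "d94b4da9-d4a7-4f92-8ec6-90e45ff748b8"))
	// Prints kubepods-besteffort-podcddeab39_52b2_4e4d_8121_8c667fc57977.slice
	fmt.Println(podSliceName("besteffort", "cddeab39-52b2-4e4d-8121-8c667fc57977"))
}
```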
Nov 1 00:17:48.356207 kubelet[2591]: I1101 00:17:48.356169 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnwn9\" (UniqueName: \"kubernetes.io/projected/6970f73b-f9db-4e4e-ace1-ad25d9704f47-kube-api-access-pnwn9\") pod \"calico-kube-controllers-6fdc77bbd4-cflc4\" (UID: \"6970f73b-f9db-4e4e-ace1-ad25d9704f47\") " pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" Nov 1 00:17:48.356282 kubelet[2591]: I1101 00:17:48.356213 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80acf411-c481-42bf-9e90-d393893e1d60-whisker-ca-bundle\") pod \"whisker-99f944c66-9zxbf\" (UID: \"80acf411-c481-42bf-9e90-d393893e1d60\") " pod="calico-system/whisker-99f944c66-9zxbf" Nov 1 00:17:48.356282 kubelet[2591]: I1101 00:17:48.356253 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d94b4da9-d4a7-4f92-8ec6-90e45ff748b8-config-volume\") pod \"coredns-674b8bbfcf-b6lhx\" (UID: \"d94b4da9-d4a7-4f92-8ec6-90e45ff748b8\") " pod="kube-system/coredns-674b8bbfcf-b6lhx" Nov 1 00:17:48.356282 kubelet[2591]: I1101 00:17:48.356277 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ab5a5667-f558-4d28-9b68-0d3dbc43d636-goldmane-key-pair\") pod \"goldmane-666569f655-ltj5c\" (UID: \"ab5a5667-f558-4d28-9b68-0d3dbc43d636\") " pod="calico-system/goldmane-666569f655-ltj5c" Nov 1 00:17:48.356354 kubelet[2591]: I1101 00:17:48.356302 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/80acf411-c481-42bf-9e90-d393893e1d60-whisker-backend-key-pair\") pod \"whisker-99f944c66-9zxbf\" (UID: \"80acf411-c481-42bf-9e90-d393893e1d60\") " pod="calico-system/whisker-99f944c66-9zxbf" Nov 1 00:17:48.356354 kubelet[2591]: I1101 00:17:48.356324 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab5a5667-f558-4d28-9b68-0d3dbc43d636-goldmane-ca-bundle\") pod \"goldmane-666569f655-ltj5c\" (UID: \"ab5a5667-f558-4d28-9b68-0d3dbc43d636\") " pod="calico-system/goldmane-666569f655-ltj5c" Nov 1 00:17:48.356354 kubelet[2591]: I1101 00:17:48.356346 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2zg4\" (UniqueName: \"kubernetes.io/projected/d3a2d948-d842-45c9-8a49-ba664ed2926c-kube-api-access-c2zg4\") pod \"calico-apiserver-84989fcb96-gd9wk\" (UID: \"d3a2d948-d842-45c9-8a49-ba664ed2926c\") " pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" Nov 1 00:17:48.356434 kubelet[2591]: I1101 00:17:48.356370 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4-config-volume\") pod \"coredns-674b8bbfcf-jqnhp\" (UID: \"6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4\") " pod="kube-system/coredns-674b8bbfcf-jqnhp" Nov 1 00:17:48.356434 kubelet[2591]: I1101 00:17:48.356390 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/d3a2d948-d842-45c9-8a49-ba664ed2926c-calico-apiserver-certs\") pod \"calico-apiserver-84989fcb96-gd9wk\" (UID: \"d3a2d948-d842-45c9-8a49-ba664ed2926c\") " pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" Nov 1 00:17:48.356482 kubelet[2591]: I1101 00:17:48.356433 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh9tm\" (UniqueName: \"kubernetes.io/projected/80acf411-c481-42bf-9e90-d393893e1d60-kube-api-access-kh9tm\") pod \"whisker-99f944c66-9zxbf\" (UID: \"80acf411-c481-42bf-9e90-d393893e1d60\") " pod="calico-system/whisker-99f944c66-9zxbf" Nov 1 00:17:48.356482 kubelet[2591]: I1101 00:17:48.356455 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3f82561-0214-49cb-b635-63c7018b0ce5-calico-apiserver-certs\") pod \"calico-apiserver-84989fcb96-gtbgf\" (UID: \"d3f82561-0214-49cb-b635-63c7018b0ce5\") " pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" Nov 1 00:17:48.356529 kubelet[2591]: I1101 00:17:48.356498 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6970f73b-f9db-4e4e-ace1-ad25d9704f47-tigera-ca-bundle\") pod \"calico-kube-controllers-6fdc77bbd4-cflc4\" (UID: \"6970f73b-f9db-4e4e-ace1-ad25d9704f47\") " pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" Nov 1 00:17:48.356555 kubelet[2591]: I1101 00:17:48.356546 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4ns7\" (UniqueName: \"kubernetes.io/projected/ab5a5667-f558-4d28-9b68-0d3dbc43d636-kube-api-access-c4ns7\") pod \"goldmane-666569f655-ltj5c\" (UID: \"ab5a5667-f558-4d28-9b68-0d3dbc43d636\") " pod="calico-system/goldmane-666569f655-ltj5c" Nov 1 00:17:48.356581 kubelet[2591]: I1101 00:17:48.356566 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpr9w\" (UniqueName: \"kubernetes.io/projected/6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4-kube-api-access-gpr9w\") pod \"coredns-674b8bbfcf-jqnhp\" (UID: \"6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4\") " pod="kube-system/coredns-674b8bbfcf-jqnhp" Nov 1 00:17:48.356607 kubelet[2591]: I1101 00:17:48.356589 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhkxw\" (UniqueName: \"kubernetes.io/projected/d94b4da9-d4a7-4f92-8ec6-90e45ff748b8-kube-api-access-nhkxw\") pod \"coredns-674b8bbfcf-b6lhx\" (UID: \"d94b4da9-d4a7-4f92-8ec6-90e45ff748b8\") " pod="kube-system/coredns-674b8bbfcf-b6lhx" Nov 1 00:17:48.356634 kubelet[2591]: I1101 00:17:48.356610 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab5a5667-f558-4d28-9b68-0d3dbc43d636-config\") pod \"goldmane-666569f655-ltj5c\" (UID: \"ab5a5667-f558-4d28-9b68-0d3dbc43d636\") " pod="calico-system/goldmane-666569f655-ltj5c" Nov 1 00:17:48.356661 kubelet[2591]: I1101 00:17:48.356638 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96ns7\" (UniqueName: \"kubernetes.io/projected/d3f82561-0214-49cb-b635-63c7018b0ce5-kube-api-access-96ns7\") pod \"calico-apiserver-84989fcb96-gtbgf\" (UID: \"d3f82561-0214-49cb-b635-63c7018b0ce5\") " 
pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" Nov 1 00:17:48.453355 kubelet[2591]: E1101 00:17:48.453205 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:48.453970 containerd[1465]: time="2025-11-01T00:17:48.453933865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:17:49.012358 kubelet[2591]: E1101 00:17:49.012286 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:49.013226 kubelet[2591]: E1101 00:17:49.012770 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:17:49.013486 containerd[1465]: time="2025-11-01T00:17:49.013428966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqnhp,Uid:6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:49.013727 containerd[1465]: time="2025-11-01T00:17:49.013697271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b6lhx,Uid:d94b4da9-d4a7-4f92-8ec6-90e45ff748b8,Namespace:kube-system,Attempt:0,}" Nov 1 00:17:49.015803 containerd[1465]: time="2025-11-01T00:17:49.014800809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdc77bbd4-cflc4,Uid:6970f73b-f9db-4e4e-ace1-ad25d9704f47,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:49.015803 containerd[1465]: time="2025-11-01T00:17:49.015297843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ltj5c,Uid:ab5a5667-f558-4d28-9b68-0d3dbc43d636,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:49.015803 containerd[1465]: time="2025-11-01T00:17:49.015507418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84989fcb96-gtbgf,Uid:d3f82561-0214-49cb-b635-63c7018b0ce5,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:17:49.016032 containerd[1465]: time="2025-11-01T00:17:49.015513079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84989fcb96-gd9wk,Uid:d3a2d948-d842-45c9-8a49-ba664ed2926c,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:17:49.016131 containerd[1465]: time="2025-11-01T00:17:49.016097765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-99f944c66-9zxbf,Uid:80acf411-c481-42bf-9e90-d393893e1d60,Namespace:calico-system,Attempt:0,}" Nov 1 00:17:49.018739 containerd[1465]: time="2025-11-01T00:17:49.018659054Z" level=error msg="Failed to destroy network for sandbox \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.023421 containerd[1465]: time="2025-11-01T00:17:49.023376518Z" level=error msg="encountered an error cleaning up failed sandbox \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.023481 containerd[1465]: time="2025-11-01T00:17:49.023440926Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v865s,Uid:cddeab39-52b2-4e4d-8121-8c667fc57977,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.023800 kubelet[2591]: E1101 00:17:49.023740 2591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.023886 kubelet[2591]: E1101 00:17:49.023838 2591 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v865s" Nov 1 00:17:49.023934 kubelet[2591]: E1101 00:17:49.023893 2591 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v865s" Nov 1 00:17:49.024014 kubelet[2591]: E1101 00:17:49.023973 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v865s_calico-system(cddeab39-52b2-4e4d-8121-8c667fc57977)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v865s_calico-system(cddeab39-52b2-4e4d-8121-8c667fc57977)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:17:49.457253 kubelet[2591]: I1101 00:17:49.457025 2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" Nov 1 00:17:49.478174 containerd[1465]: time="2025-11-01T00:17:49.478100930Z" level=info msg="StopPodSandbox for \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\"" Nov 1 00:17:49.482565 containerd[1465]: time="2025-11-01T00:17:49.482354491Z" level=info msg="Ensure that sandbox e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f in task-service has been cleanup successfully" Nov 1 00:17:49.570526 containerd[1465]: time="2025-11-01T00:17:49.570432084Z" level=error msg="StopPodSandbox for \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\" failed" error="failed to destroy network for sandbox 
\"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.570879 kubelet[2591]: E1101 00:17:49.570736 2591 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" Nov 1 00:17:49.571068 kubelet[2591]: E1101 00:17:49.570813 2591 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f"} Nov 1 00:17:49.571068 kubelet[2591]: E1101 00:17:49.570937 2591 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cddeab39-52b2-4e4d-8121-8c667fc57977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:49.571068 kubelet[2591]: E1101 00:17:49.570969 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cddeab39-52b2-4e4d-8121-8c667fc57977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:17:49.589433 containerd[1465]: time="2025-11-01T00:17:49.589356652Z" level=error msg="Failed to destroy network for sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.590673 containerd[1465]: time="2025-11-01T00:17:49.589959971Z" level=error msg="encountered an error cleaning up failed sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.590673 containerd[1465]: time="2025-11-01T00:17:49.590028748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84989fcb96-gtbgf,Uid:d3f82561-0214-49cb-b635-63c7018b0ce5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 1 00:17:49.590842 kubelet[2591]: E1101 00:17:49.590410 2591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.590842 kubelet[2591]: E1101 00:17:49.590512 2591 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" Nov 1 00:17:49.590842 kubelet[2591]: E1101 00:17:49.590543 2591 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" Nov 1 00:17:49.591778 kubelet[2591]: E1101 00:17:49.590625 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84989fcb96-gtbgf_calico-apiserver(d3f82561-0214-49cb-b635-63c7018b0ce5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84989fcb96-gtbgf_calico-apiserver(d3f82561-0214-49cb-b635-63c7018b0ce5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" podUID="d3f82561-0214-49cb-b635-63c7018b0ce5" Nov 1 00:17:49.592732 containerd[1465]: time="2025-11-01T00:17:49.592683289Z" level=error msg="Failed to destroy network for sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.593309 containerd[1465]: time="2025-11-01T00:17:49.593268615Z" level=error msg="encountered an error cleaning up failed sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.593372 containerd[1465]: time="2025-11-01T00:17:49.593332963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b6lhx,Uid:d94b4da9-d4a7-4f92-8ec6-90e45ff748b8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.595760 kubelet[2591]: E1101 00:17:49.595699 2591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.595981 kubelet[2591]: E1101 00:17:49.595792 2591 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b6lhx" Nov 1 00:17:49.595981 kubelet[2591]: E1101 00:17:49.595823 2591 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b6lhx" Nov 1 00:17:49.596114 kubelet[2591]: E1101 00:17:49.595944 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-b6lhx_kube-system(d94b4da9-d4a7-4f92-8ec6-90e45ff748b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-b6lhx_kube-system(d94b4da9-d4a7-4f92-8ec6-90e45ff748b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b6lhx" podUID="d94b4da9-d4a7-4f92-8ec6-90e45ff748b8" Nov 1 00:17:49.596357 containerd[1465]: time="2025-11-01T00:17:49.596315216Z" level=error msg="Failed to destroy network for sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.596835 containerd[1465]: time="2025-11-01T00:17:49.596793446Z" level=error msg="encountered an error cleaning up failed sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.596960 containerd[1465]: time="2025-11-01T00:17:49.596849930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ltj5c,Uid:ab5a5667-f558-4d28-9b68-0d3dbc43d636,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.597181 kubelet[2591]: E1101 00:17:49.597049 2591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.597181 kubelet[2591]: E1101 00:17:49.597101 2591 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ltj5c" Nov 1 00:17:49.597181 kubelet[2591]: E1101 00:17:49.597124 2591 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ltj5c" Nov 1 00:17:49.597345 kubelet[2591]: E1101 00:17:49.597172 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ltj5c_calico-system(ab5a5667-f558-4d28-9b68-0d3dbc43d636)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ltj5c_calico-system(ab5a5667-f558-4d28-9b68-0d3dbc43d636)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ltj5c" podUID="ab5a5667-f558-4d28-9b68-0d3dbc43d636" Nov 1 00:17:49.602219 containerd[1465]: time="2025-11-01T00:17:49.601998457Z" level=error msg="Failed to destroy network for sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.602588 containerd[1465]: time="2025-11-01T00:17:49.602538720Z" level=error msg="encountered an error cleaning up failed sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.602639 containerd[1465]: time="2025-11-01T00:17:49.602605463Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6fdc77bbd4-cflc4,Uid:6970f73b-f9db-4e4e-ace1-ad25d9704f47,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.603173 kubelet[2591]: E1101 00:17:49.602931 2591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.603173 kubelet[2591]: E1101 00:17:49.603009 2591 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" Nov 1 00:17:49.603173 kubelet[2591]: E1101 00:17:49.603035 2591 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" Nov 1 00:17:49.603345 kubelet[2591]: E1101 00:17:49.603110 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fdc77bbd4-cflc4_calico-system(6970f73b-f9db-4e4e-ace1-ad25d9704f47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fdc77bbd4-cflc4_calico-system(6970f73b-f9db-4e4e-ace1-ad25d9704f47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" podUID="6970f73b-f9db-4e4e-ace1-ad25d9704f47" Nov 1 00:17:49.622014 containerd[1465]: time="2025-11-01T00:17:49.621915238Z" level=error msg="Failed to destroy network for sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.622631 containerd[1465]: time="2025-11-01T00:17:49.622578648Z" level=error msg="encountered an error cleaning up failed sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Nov 1 00:17:49.622807 containerd[1465]: time="2025-11-01T00:17:49.622774348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqnhp,Uid:6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.623495 kubelet[2591]: E1101 00:17:49.623435 2591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.623587 kubelet[2591]: E1101 00:17:49.623530 2591 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jqnhp" Nov 1 00:17:49.623587 kubelet[2591]: E1101 00:17:49.623567 2591 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jqnhp" Nov 1 00:17:49.623736 kubelet[2591]: E1101 00:17:49.623682 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jqnhp_kube-system(6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jqnhp_kube-system(6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jqnhp" podUID="6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4" Nov 1 00:17:49.626190 containerd[1465]: time="2025-11-01T00:17:49.626143924Z" level=error msg="Failed to destroy network for sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.626803 containerd[1465]: time="2025-11-01T00:17:49.626771478Z" level=error msg="encountered an error cleaning up failed sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.627099 containerd[1465]: time="2025-11-01T00:17:49.626984680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-99f944c66-9zxbf,Uid:80acf411-c481-42bf-9e90-d393893e1d60,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.627223 kubelet[2591]: E1101 00:17:49.627183 2591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.627274 kubelet[2591]: E1101 00:17:49.627245 2591 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-99f944c66-9zxbf" Nov 1 00:17:49.627311 kubelet[2591]: E1101 00:17:49.627272 2591 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-99f944c66-9zxbf" Nov 1 00:17:49.627349 kubelet[2591]: E1101 00:17:49.627323 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-99f944c66-9zxbf_calico-system(80acf411-c481-42bf-9e90-d393893e1d60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-99f944c66-9zxbf_calico-system(80acf411-c481-42bf-9e90-d393893e1d60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-99f944c66-9zxbf" podUID="80acf411-c481-42bf-9e90-d393893e1d60" Nov 1 00:17:49.628125 containerd[1465]: time="2025-11-01T00:17:49.628090033Z" level=error msg="Failed to destroy network for sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.628493 containerd[1465]: time="2025-11-01T00:17:49.628452079Z" level=error msg="encountered an error cleaning up failed sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.628554 containerd[1465]: time="2025-11-01T00:17:49.628492773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84989fcb96-gd9wk,Uid:d3a2d948-d842-45c9-8a49-ba664ed2926c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.628676 kubelet[2591]: E1101 00:17:49.628647 2591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:49.628723 kubelet[2591]: E1101 00:17:49.628684 2591 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" Nov 1 00:17:49.628723 kubelet[2591]: E1101 00:17:49.628704 2591 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" Nov 1 00:17:49.628798 kubelet[2591]: E1101 00:17:49.628746 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84989fcb96-gd9wk_calico-apiserver(d3a2d948-d842-45c9-8a49-ba664ed2926c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84989fcb96-gd9wk_calico-apiserver(d3a2d948-d842-45c9-8a49-ba664ed2926c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" podUID="d3a2d948-d842-45c9-8a49-ba664ed2926c" Nov 1 00:17:50.182111 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c-shm.mount: Deactivated successfully. Nov 1 00:17:50.182229 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7-shm.mount: Deactivated successfully. Nov 1 00:17:50.182309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6-shm.mount: Deactivated successfully. 
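Every sandbox failure above bottoms out in the same stat call: the Calico CNI plugin reads /var/lib/calico/nodename, a file that the calico/node container writes only once it is running with /var/lib/calico mounted, so every CNI add and delete on this node returns ENOENT until then. A minimal sketch of that readiness check, for illustration only (this is not Calico's actual source; the path is taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path reported in the errors above; calico/node creates it at startup.
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the condition the kubelet keeps surfacing: until
		// calico/node runs, every pod sandbox operation fails here.
		fmt.Printf("calico/node not ready: %v\n", err)
		return
	}
	fmt.Printf("node registered with Calico as %q\n", strings.TrimSpace(string(data)))
}

Once calico/node starts (see 00:18:01 below), the same stat succeeds and sandbox teardown begins to complete normally.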
Nov 1 00:17:50.182388 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c-shm.mount: Deactivated successfully. Nov 1 00:17:50.460621 kubelet[2591]: I1101 00:17:50.460422 2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:17:50.461434 kubelet[2591]: I1101 00:17:50.461322 2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:17:50.462613 containerd[1465]: time="2025-11-01T00:17:50.461244159Z" level=info msg="StopPodSandbox for \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\"" Nov 1 00:17:50.462613 containerd[1465]: time="2025-11-01T00:17:50.461458975Z" level=info msg="Ensure that sandbox dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6 in task-service has been cleanup successfully" Nov 1 00:17:50.462613 containerd[1465]: time="2025-11-01T00:17:50.461719976Z" level=info msg="StopPodSandbox for \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\"" Nov 1 00:17:50.462613 containerd[1465]: time="2025-11-01T00:17:50.461894778Z" level=info msg="Ensure that sandbox f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b in task-service has been cleanup successfully" Nov 1 00:17:50.491840 kubelet[2591]: I1101 00:17:50.491776 2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:17:50.492776 containerd[1465]: time="2025-11-01T00:17:50.492734110Z" level=info msg="StopPodSandbox for \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\"" Nov 1 00:17:50.493235 containerd[1465]: time="2025-11-01T00:17:50.492935952Z" level=info msg="Ensure that sandbox 16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74 in task-service has been cleanup successfully" Nov 1 00:17:50.494207 kubelet[2591]: I1101 00:17:50.494176 2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:17:50.495088 containerd[1465]: time="2025-11-01T00:17:50.495053440Z" level=info msg="StopPodSandbox for \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\"" Nov 1 00:17:50.495816 containerd[1465]: time="2025-11-01T00:17:50.495776391Z" level=info msg="Ensure that sandbox 242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7 in task-service has been cleanup successfully" Nov 1 00:17:50.502517 kubelet[2591]: I1101 00:17:50.502473 2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:17:50.504456 containerd[1465]: time="2025-11-01T00:17:50.504391931Z" level=info msg="StopPodSandbox for \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\"" Nov 1 00:17:50.505522 containerd[1465]: time="2025-11-01T00:17:50.505486587Z" level=info msg="Ensure that sandbox c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375 in task-service has been cleanup successfully" Nov 1 00:17:50.518837 containerd[1465]: time="2025-11-01T00:17:50.518749031Z" level=error msg="StopPodSandbox for \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\" failed" error="failed to destroy network for sandbox 
\"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:50.519110 kubelet[2591]: E1101 00:17:50.519065 2591 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:17:50.519226 kubelet[2591]: E1101 00:17:50.519121 2591 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6"} Nov 1 00:17:50.519226 kubelet[2591]: E1101 00:17:50.519165 2591 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d94b4da9-d4a7-4f92-8ec6-90e45ff748b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:50.519226 kubelet[2591]: E1101 00:17:50.519190 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d94b4da9-d4a7-4f92-8ec6-90e45ff748b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b6lhx" podUID="d94b4da9-d4a7-4f92-8ec6-90e45ff748b8" Nov 1 00:17:50.521016 containerd[1465]: time="2025-11-01T00:17:50.520967224Z" level=error msg="StopPodSandbox for \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\" failed" error="failed to destroy network for sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:50.521157 kubelet[2591]: E1101 00:17:50.521118 2591 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:17:50.521157 kubelet[2591]: E1101 00:17:50.521154 2591 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b"} Nov 1 00:17:50.521251 kubelet[2591]: E1101 00:17:50.521178 2591 kuberuntime_manager.go:1161] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:50.521251 kubelet[2591]: E1101 00:17:50.521198 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jqnhp" podUID="6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4" Nov 1 00:17:50.521368 kubelet[2591]: I1101 00:17:50.521293 2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:17:50.522615 containerd[1465]: time="2025-11-01T00:17:50.521828941Z" level=info msg="StopPodSandbox for \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\"" Nov 1 00:17:50.522615 containerd[1465]: time="2025-11-01T00:17:50.522135375Z" level=info msg="Ensure that sandbox b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c in task-service has been cleanup successfully" Nov 1 00:17:50.547727 kubelet[2591]: I1101 00:17:50.546780 2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:17:50.547880 containerd[1465]: time="2025-11-01T00:17:50.547492506Z" level=info msg="StopPodSandbox for \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\"" Nov 1 00:17:50.555910 containerd[1465]: time="2025-11-01T00:17:50.555829113Z" level=info msg="Ensure that sandbox f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c in task-service has been cleanup successfully" Nov 1 00:17:50.563055 containerd[1465]: time="2025-11-01T00:17:50.563001186Z" level=error msg="StopPodSandbox for \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\" failed" error="failed to destroy network for sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:50.563542 kubelet[2591]: E1101 00:17:50.563505 2591 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:17:50.563679 kubelet[2591]: E1101 00:17:50.563657 2591 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375"} Nov 1 00:17:50.563785 kubelet[2591]: E1101 00:17:50.563769 2591 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3f82561-0214-49cb-b635-63c7018b0ce5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:50.564007 kubelet[2591]: E1101 00:17:50.563949 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3f82561-0214-49cb-b635-63c7018b0ce5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" podUID="d3f82561-0214-49cb-b635-63c7018b0ce5" Nov 1 00:17:50.577783 containerd[1465]: time="2025-11-01T00:17:50.577709613Z" level=error msg="StopPodSandbox for \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\" failed" error="failed to destroy network for sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:50.578412 kubelet[2591]: E1101 00:17:50.578346 2591 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:17:50.578622 kubelet[2591]: E1101 00:17:50.578599 2591 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74"} Nov 1 00:17:50.578745 kubelet[2591]: E1101 00:17:50.578688 2591 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80acf411-c481-42bf-9e90-d393893e1d60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:50.578745 kubelet[2591]: E1101 00:17:50.578714 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80acf411-c481-42bf-9e90-d393893e1d60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-99f944c66-9zxbf" podUID="80acf411-c481-42bf-9e90-d393893e1d60" Nov 1 00:17:50.586111 containerd[1465]: time="2025-11-01T00:17:50.586060255Z" level=error msg="StopPodSandbox for \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\" failed" error="failed to destroy network for sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:50.586518 kubelet[2591]: E1101 00:17:50.586485 2591 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:17:50.586625 kubelet[2591]: E1101 00:17:50.586607 2591 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7"} Nov 1 00:17:50.586753 kubelet[2591]: E1101 00:17:50.586693 2591 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d3a2d948-d842-45c9-8a49-ba664ed2926c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:50.586753 kubelet[2591]: E1101 00:17:50.586723 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d3a2d948-d842-45c9-8a49-ba664ed2926c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" podUID="d3a2d948-d842-45c9-8a49-ba664ed2926c" Nov 1 00:17:50.591148 containerd[1465]: time="2025-11-01T00:17:50.591070590Z" level=error msg="StopPodSandbox for \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\" failed" error="failed to destroy network for sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:50.591412 kubelet[2591]: E1101 00:17:50.591367 2591 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:17:50.591468 kubelet[2591]: E1101 00:17:50.591431 2591 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c"} Nov 1 00:17:50.591537 kubelet[2591]: E1101 00:17:50.591495 2591 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab5a5667-f558-4d28-9b68-0d3dbc43d636\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:50.591656 kubelet[2591]: E1101 00:17:50.591531 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab5a5667-f558-4d28-9b68-0d3dbc43d636\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ltj5c" podUID="ab5a5667-f558-4d28-9b68-0d3dbc43d636" Nov 1 00:17:50.596175 containerd[1465]: time="2025-11-01T00:17:50.596134442Z" level=error msg="StopPodSandbox for \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\" failed" error="failed to destroy network for sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:17:50.596427 kubelet[2591]: E1101 00:17:50.596395 2591 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:17:50.596494 kubelet[2591]: E1101 00:17:50.596432 2591 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c"} Nov 1 00:17:50.596494 kubelet[2591]: E1101 00:17:50.596463 2591 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6970f73b-f9db-4e4e-ace1-ad25d9704f47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:17:50.596575 kubelet[2591]: E1101 00:17:50.596490 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"6970f73b-f9db-4e4e-ace1-ad25d9704f47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" podUID="6970f73b-f9db-4e4e-ace1-ad25d9704f47" Nov 1 00:17:53.205125 systemd[1]: Started sshd@9-10.0.0.38:22-10.0.0.1:44160.service - OpenSSH per-connection server daemon (10.0.0.1:44160). Nov 1 00:17:53.242369 sshd[3813]: Accepted publickey for core from 10.0.0.1 port 44160 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:17:53.244653 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:17:53.250678 systemd-logind[1452]: New session 10 of user core. Nov 1 00:17:53.257057 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:17:53.404938 sshd[3813]: pam_unix(sshd:session): session closed for user core Nov 1 00:17:53.409304 systemd[1]: sshd@9-10.0.0.38:22-10.0.0.1:44160.service: Deactivated successfully. Nov 1 00:17:53.411850 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:17:53.412601 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:17:53.413776 systemd-logind[1452]: Removed session 10. Nov 1 00:17:56.073025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3446550100.mount: Deactivated successfully. Nov 1 00:17:58.421206 systemd[1]: Started sshd@10-10.0.0.38:22-10.0.0.1:45640.service - OpenSSH per-connection server daemon (10.0.0.1:45640). Nov 1 00:17:59.511115 sshd[3833]: Accepted publickey for core from 10.0.0.1 port 45640 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:17:59.513785 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:17:59.520471 systemd-logind[1452]: New session 11 of user core. Nov 1 00:17:59.531105 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:17:59.623434 containerd[1465]: time="2025-11-01T00:17:59.623343435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:17:59.840323 containerd[1465]: time="2025-11-01T00:17:59.840124189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:18:00.176536 sshd[3833]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:00.180435 systemd[1]: sshd@10-10.0.0.38:22-10.0.0.1:45640.service: Deactivated successfully. Nov 1 00:18:00.182619 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:18:00.187907 containerd[1465]: time="2025-11-01T00:18:00.187036197Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:18:00.183296 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:18:00.184302 systemd-logind[1452]: Removed session 11. 
Nov 1 00:18:00.394549 containerd[1465]: time="2025-11-01T00:18:00.394473844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:18:00.395173 containerd[1465]: time="2025-11-01T00:18:00.395144260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.941167717s" Nov 1 00:18:00.395218 containerd[1465]: time="2025-11-01T00:18:00.395175569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:18:00.554594 containerd[1465]: time="2025-11-01T00:18:00.554524206Z" level=info msg="CreateContainer within sandbox \"6d2a430caa29e09edd2db2f90f192a6d33ffbb995c3d38f75f8e7acbb0c27278\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:18:01.283788 containerd[1465]: time="2025-11-01T00:18:01.283643961Z" level=info msg="StopPodSandbox for \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\"" Nov 1 00:18:01.313084 containerd[1465]: time="2025-11-01T00:18:01.312991239Z" level=error msg="StopPodSandbox for \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\" failed" error="failed to destroy network for sandbox \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:18:01.313398 kubelet[2591]: E1101 00:18:01.313318 2591 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" Nov 1 00:18:01.313947 kubelet[2591]: E1101 00:18:01.313405 2591 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f"} Nov 1 00:18:01.313947 kubelet[2591]: E1101 00:18:01.313451 2591 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cddeab39-52b2-4e4d-8121-8c667fc57977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:18:01.313947 kubelet[2591]: E1101 00:18:01.313496 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cddeab39-52b2-4e4d-8121-8c667fc57977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:18:01.486545 containerd[1465]: time="2025-11-01T00:18:01.486391278Z" level=info msg="CreateContainer within sandbox \"6d2a430caa29e09edd2db2f90f192a6d33ffbb995c3d38f75f8e7acbb0c27278\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f09e34949c5b30f06740f7c1cfdc9ff240ce3c0e46cce514a408e7bc2044290e\"" Nov 1 00:18:01.490974 containerd[1465]: time="2025-11-01T00:18:01.489047731Z" level=info msg="StartContainer for \"f09e34949c5b30f06740f7c1cfdc9ff240ce3c0e46cce514a408e7bc2044290e\"" Nov 1 00:18:01.578672 systemd[1]: Started cri-containerd-f09e34949c5b30f06740f7c1cfdc9ff240ce3c0e46cce514a408e7bc2044290e.scope - libcontainer container f09e34949c5b30f06740f7c1cfdc9ff240ce3c0e46cce514a408e7bc2044290e. Nov 1 00:18:01.738564 containerd[1465]: time="2025-11-01T00:18:01.738499915Z" level=info msg="StartContainer for \"f09e34949c5b30f06740f7c1cfdc9ff240ce3c0e46cce514a408e7bc2044290e\" returns successfully" Nov 1 00:18:01.809440 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:18:01.812724 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:18:02.216928 containerd[1465]: time="2025-11-01T00:18:02.216834489Z" level=info msg="StopPodSandbox for \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\"" Nov 1 00:18:02.284385 containerd[1465]: time="2025-11-01T00:18:02.283987294Z" level=info msg="StopPodSandbox for \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\"" Nov 1 00:18:02.284385 containerd[1465]: time="2025-11-01T00:18:02.284233473Z" level=info msg="StopPodSandbox for \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\"" Nov 1 00:18:02.285009 containerd[1465]: time="2025-11-01T00:18:02.284764401Z" level=info msg="StopPodSandbox for \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\"" Nov 1 00:18:02.592384 kubelet[2591]: E1101 00:18:02.592336 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:02.616002 kubelet[2591]: I1101 00:18:02.615726 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8hlvg" podStartSLOduration=3.593009035 podStartE2EDuration="28.615703475s" podCreationTimestamp="2025-11-01 00:17:34 +0000 UTC" firstStartedPulling="2025-11-01 00:17:35.373117421 +0000 UTC m=+20.292114145" lastFinishedPulling="2025-11-01 00:18:00.395811861 +0000 UTC m=+45.314808585" observedRunningTime="2025-11-01 00:18:02.613359981 +0000 UTC m=+47.532356716" watchObservedRunningTime="2025-11-01 00:18:02.615703475 +0000 UTC m=+47.534700199" Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.389 [INFO][3976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.389 [INFO][3976] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" iface="eth0" netns="/var/run/netns/cni-99e695c6-204c-bf50-a988-fc5642ff6437" Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.390 [INFO][3976] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" iface="eth0" netns="/var/run/netns/cni-99e695c6-204c-bf50-a988-fc5642ff6437" Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.390 [INFO][3976] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" iface="eth0" netns="/var/run/netns/cni-99e695c6-204c-bf50-a988-fc5642ff6437" Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.390 [INFO][3976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.390 [INFO][3976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.604 [INFO][4000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" HandleID="k8s-pod-network.c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.605 [INFO][4000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.605 [INFO][4000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.617 [WARNING][4000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" HandleID="k8s-pod-network.c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.618 [INFO][4000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" HandleID="k8s-pod-network.c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.620 [INFO][4000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:02.630967 containerd[1465]: 2025-11-01 00:18:02.624 [INFO][3976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:02.634452 containerd[1465]: time="2025-11-01T00:18:02.633706402Z" level=info msg="TearDown network for sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\" successfully" Nov 1 00:18:02.634688 containerd[1465]: time="2025-11-01T00:18:02.634659748Z" level=info msg="StopPodSandbox for \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\" returns successfully" Nov 1 00:18:02.635718 systemd[1]: run-netns-cni\x2d99e695c6\x2d204c\x2dbf50\x2da988\x2dfc5642ff6437.mount: Deactivated successfully. 
Nov 1 00:18:02.639937 containerd[1465]: time="2025-11-01T00:18:02.639668986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84989fcb96-gtbgf,Uid:d3f82561-0214-49cb-b635-63c7018b0ce5,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.401 [INFO][3973] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.401 [INFO][3973] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" iface="eth0" netns="/var/run/netns/cni-f64bcbc4-79ae-8383-3ba8-c05904da4bd5" Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.402 [INFO][3973] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" iface="eth0" netns="/var/run/netns/cni-f64bcbc4-79ae-8383-3ba8-c05904da4bd5" Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.402 [INFO][3973] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" iface="eth0" netns="/var/run/netns/cni-f64bcbc4-79ae-8383-3ba8-c05904da4bd5" Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.402 [INFO][3973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.402 [INFO][3973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.603 [INFO][4004] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" HandleID="k8s-pod-network.b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.606 [INFO][4004] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.620 [INFO][4004] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.630 [WARNING][4004] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" HandleID="k8s-pod-network.b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.630 [INFO][4004] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" HandleID="k8s-pod-network.b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.638 [INFO][4004] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:02.647990 containerd[1465]: 2025-11-01 00:18:02.642 [INFO][3973] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:02.651745 containerd[1465]: time="2025-11-01T00:18:02.651671788Z" level=info msg="TearDown network for sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\" successfully" Nov 1 00:18:02.651745 containerd[1465]: time="2025-11-01T00:18:02.651715650Z" level=info msg="StopPodSandbox for \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\" returns successfully" Nov 1 00:18:02.652607 systemd[1]: run-netns-cni\x2df64bcbc4\x2d79ae\x2d8383\x2d3ba8\x2dc05904da4bd5.mount: Deactivated successfully. Nov 1 00:18:02.653520 containerd[1465]: time="2025-11-01T00:18:02.653483573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ltj5c,Uid:ab5a5667-f558-4d28-9b68-0d3dbc43d636,Namespace:calico-system,Attempt:1,}" Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.424 [INFO][3975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.425 [INFO][3975] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" iface="eth0" netns="/var/run/netns/cni-931284ff-e1eb-006d-5f2f-eee3fb8949e1" Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.425 [INFO][3975] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" iface="eth0" netns="/var/run/netns/cni-931284ff-e1eb-006d-5f2f-eee3fb8949e1" Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.426 [INFO][3975] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" iface="eth0" netns="/var/run/netns/cni-931284ff-e1eb-006d-5f2f-eee3fb8949e1" Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.426 [INFO][3975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.426 [INFO][3975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.609 [INFO][4006] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" HandleID="k8s-pod-network.f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.610 [INFO][4006] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.639 [INFO][4006] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.647 [WARNING][4006] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" HandleID="k8s-pod-network.f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.647 [INFO][4006] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" HandleID="k8s-pod-network.f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.651 [INFO][4006] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:02.666265 containerd[1465]: 2025-11-01 00:18:02.656 [INFO][3975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:02.668892 containerd[1465]: time="2025-11-01T00:18:02.666490695Z" level=info msg="TearDown network for sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\" successfully" Nov 1 00:18:02.668892 containerd[1465]: time="2025-11-01T00:18:02.666523607Z" level=info msg="StopPodSandbox for \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\" returns successfully" Nov 1 00:18:02.671247 containerd[1465]: time="2025-11-01T00:18:02.670172101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdc77bbd4-cflc4,Uid:6970f73b-f9db-4e4e-ace1-ad25d9704f47,Namespace:calico-system,Attempt:1,}" Nov 1 00:18:02.670980 systemd[1]: run-netns-cni\x2d931284ff\x2de1eb\x2d006d\x2d5f2f\x2deee3fb8949e1.mount: Deactivated successfully. Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.345 [INFO][3932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.350 [INFO][3932] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" iface="eth0" netns="/var/run/netns/cni-3de07827-0d1c-c5b8-407b-efcd73de20ae" Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.352 [INFO][3932] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" iface="eth0" netns="/var/run/netns/cni-3de07827-0d1c-c5b8-407b-efcd73de20ae" Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.362 [INFO][3932] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" iface="eth0" netns="/var/run/netns/cni-3de07827-0d1c-c5b8-407b-efcd73de20ae" Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.362 [INFO][3932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.362 [INFO][3932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.610 [INFO][3997] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" HandleID="k8s-pod-network.16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Workload="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.610 [INFO][3997] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.652 [INFO][3997] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.660 [WARNING][3997] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" HandleID="k8s-pod-network.16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Workload="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.660 [INFO][3997] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" HandleID="k8s-pod-network.16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Workload="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.662 [INFO][3997] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:02.680374 containerd[1465]: 2025-11-01 00:18:02.674 [INFO][3932] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:02.680801 containerd[1465]: time="2025-11-01T00:18:02.680556509Z" level=info msg="TearDown network for sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\" successfully" Nov 1 00:18:02.680801 containerd[1465]: time="2025-11-01T00:18:02.680582177Z" level=info msg="StopPodSandbox for \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\" returns successfully" Nov 1 00:18:02.685054 systemd[1]: run-netns-cni\x2d3de07827\x2d0d1c\x2dc5b8\x2d407b\x2defcd73de20ae.mount: Deactivated successfully. 
Nov 1 00:18:02.762769 kubelet[2591]: I1101 00:18:02.762001 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/80acf411-c481-42bf-9e90-d393893e1d60-whisker-backend-key-pair\") pod \"80acf411-c481-42bf-9e90-d393893e1d60\" (UID: \"80acf411-c481-42bf-9e90-d393893e1d60\") " Nov 1 00:18:02.762964 kubelet[2591]: I1101 00:18:02.762820 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80acf411-c481-42bf-9e90-d393893e1d60-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "80acf411-c481-42bf-9e90-d393893e1d60" (UID: "80acf411-c481-42bf-9e90-d393893e1d60"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:18:02.763705 kubelet[2591]: I1101 00:18:02.763676 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80acf411-c481-42bf-9e90-d393893e1d60-whisker-ca-bundle\") pod \"80acf411-c481-42bf-9e90-d393893e1d60\" (UID: \"80acf411-c481-42bf-9e90-d393893e1d60\") " Nov 1 00:18:02.763761 kubelet[2591]: I1101 00:18:02.763711 2591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh9tm\" (UniqueName: \"kubernetes.io/projected/80acf411-c481-42bf-9e90-d393893e1d60-kube-api-access-kh9tm\") pod \"80acf411-c481-42bf-9e90-d393893e1d60\" (UID: \"80acf411-c481-42bf-9e90-d393893e1d60\") " Nov 1 00:18:02.771508 kubelet[2591]: I1101 00:18:02.771442 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80acf411-c481-42bf-9e90-d393893e1d60-kube-api-access-kh9tm" (OuterVolumeSpecName: "kube-api-access-kh9tm") pod "80acf411-c481-42bf-9e90-d393893e1d60" (UID: "80acf411-c481-42bf-9e90-d393893e1d60"). InnerVolumeSpecName "kube-api-access-kh9tm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:18:02.772004 kubelet[2591]: I1101 00:18:02.771965 2591 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80acf411-c481-42bf-9e90-d393893e1d60-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "80acf411-c481-42bf-9e90-d393893e1d60" (UID: "80acf411-c481-42bf-9e90-d393893e1d60"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:18:02.838327 systemd-networkd[1404]: cali89476d61464: Link UP Nov 1 00:18:02.838624 systemd-networkd[1404]: cali89476d61464: Gained carrier Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.718 [INFO][4048] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.731 [INFO][4048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0 calico-apiserver-84989fcb96- calico-apiserver d3f82561-0214-49cb-b635-63c7018b0ce5 1022 0 2025-11-01 00:17:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84989fcb96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84989fcb96-gtbgf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89476d61464 [] [] }} ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gtbgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.731 [INFO][4048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gtbgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.774 [INFO][4103] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" HandleID="k8s-pod-network.b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.775 [INFO][4103] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" HandleID="k8s-pod-network.b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325b50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84989fcb96-gtbgf", "timestamp":"2025-11-01 00:18:02.774838035 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.775 [INFO][4103] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.775 [INFO][4103] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.775 [INFO][4103] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.783 [INFO][4103] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" host="localhost" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.790 [INFO][4103] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.800 [INFO][4103] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.804 [INFO][4103] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.806 [INFO][4103] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.806 [INFO][4103] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" host="localhost" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.808 [INFO][4103] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204 Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.814 [INFO][4103] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" host="localhost" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.820 [INFO][4103] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" host="localhost" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.820 [INFO][4103] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" host="localhost" Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.821 [INFO][4103] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:02.855473 containerd[1465]: 2025-11-01 00:18:02.821 [INFO][4103] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" HandleID="k8s-pod-network.b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:02.856134 containerd[1465]: 2025-11-01 00:18:02.826 [INFO][4048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gtbgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0", GenerateName:"calico-apiserver-84989fcb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f82561-0214-49cb-b635-63c7018b0ce5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84989fcb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84989fcb96-gtbgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89476d61464", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:02.856134 containerd[1465]: 2025-11-01 00:18:02.827 [INFO][4048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gtbgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:02.856134 containerd[1465]: 2025-11-01 00:18:02.827 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89476d61464 ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gtbgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:02.856134 containerd[1465]: 2025-11-01 00:18:02.839 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gtbgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:02.856134 containerd[1465]: 2025-11-01 00:18:02.839 [INFO][4048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gtbgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0", GenerateName:"calico-apiserver-84989fcb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f82561-0214-49cb-b635-63c7018b0ce5", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84989fcb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204", Pod:"calico-apiserver-84989fcb96-gtbgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89476d61464", MAC:"82:b2:ae:ab:0c:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:02.856134 containerd[1465]: 2025-11-01 00:18:02.850 [INFO][4048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gtbgf" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:02.864346 kubelet[2591]: I1101 00:18:02.864311 2591 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/80acf411-c481-42bf-9e90-d393893e1d60-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 00:18:02.864346 kubelet[2591]: I1101 00:18:02.864344 2591 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80acf411-c481-42bf-9e90-d393893e1d60-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 00:18:02.864476 kubelet[2591]: I1101 00:18:02.864353 2591 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kh9tm\" (UniqueName: \"kubernetes.io/projected/80acf411-c481-42bf-9e90-d393893e1d60-kube-api-access-kh9tm\") on node \"localhost\" DevicePath \"\"" Nov 1 00:18:02.893494 containerd[1465]: time="2025-11-01T00:18:02.892810491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:02.893494 containerd[1465]: time="2025-11-01T00:18:02.892937858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:02.893494 containerd[1465]: time="2025-11-01T00:18:02.892954258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:02.893494 containerd[1465]: time="2025-11-01T00:18:02.893042974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:02.915072 systemd[1]: Started cri-containerd-b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204.scope - libcontainer container b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204. Nov 1 00:18:02.930400 systemd-networkd[1404]: cali322a4b2d395: Link UP Nov 1 00:18:02.930685 systemd-networkd[1404]: cali322a4b2d395: Gained carrier Nov 1 00:18:02.932608 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.739 [INFO][4068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.762 [INFO][4068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0 calico-kube-controllers-6fdc77bbd4- calico-system 6970f73b-f9db-4e4e-ace1-ad25d9704f47 1024 0 2025-11-01 00:17:35 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fdc77bbd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6fdc77bbd4-cflc4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali322a4b2d395 [] [] <nil>}} ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Namespace="calico-system" Pod="calico-kube-controllers-6fdc77bbd4-cflc4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.762 [INFO][4068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Namespace="calico-system" Pod="calico-kube-controllers-6fdc77bbd4-cflc4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.797 [INFO][4126] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" HandleID="k8s-pod-network.bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.798 [INFO][4126] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" HandleID="k8s-pod-network.bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001393f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6fdc77bbd4-cflc4", "timestamp":"2025-11-01 00:18:02.79794696 +0000 UTC"},
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.798 [INFO][4126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.821 [INFO][4126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.821 [INFO][4126] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.883 [INFO][4126] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" host="localhost" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.893 [INFO][4126] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.898 [INFO][4126] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.900 [INFO][4126] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.902 [INFO][4126] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.902 [INFO][4126] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" host="localhost" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.904 [INFO][4126] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.908 [INFO][4126] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" host="localhost" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.918 [INFO][4126] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" host="localhost" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.918 [INFO][4126] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" host="localhost" Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.918 [INFO][4126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:02.948842 containerd[1465]: 2025-11-01 00:18:02.918 [INFO][4126] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" HandleID="k8s-pod-network.bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:02.950380 containerd[1465]: 2025-11-01 00:18:02.928 [INFO][4068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Namespace="calico-system" Pod="calico-kube-controllers-6fdc77bbd4-cflc4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0", GenerateName:"calico-kube-controllers-6fdc77bbd4-", Namespace:"calico-system", SelfLink:"", UID:"6970f73b-f9db-4e4e-ace1-ad25d9704f47", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fdc77bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6fdc77bbd4-cflc4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali322a4b2d395", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:02.950380 containerd[1465]: 2025-11-01 00:18:02.928 [INFO][4068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Namespace="calico-system" Pod="calico-kube-controllers-6fdc77bbd4-cflc4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:02.950380 containerd[1465]: 2025-11-01 00:18:02.928 [INFO][4068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali322a4b2d395 ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Namespace="calico-system" Pod="calico-kube-controllers-6fdc77bbd4-cflc4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:02.950380 containerd[1465]: 2025-11-01 00:18:02.930 [INFO][4068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Namespace="calico-system" Pod="calico-kube-controllers-6fdc77bbd4-cflc4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:02.950380 containerd[1465]: 2025-11-01 00:18:02.931 [INFO][4068] cni-plugin/k8s.go 446: Added Mac,
interface name, and active container ID to endpoint ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Namespace="calico-system" Pod="calico-kube-controllers-6fdc77bbd4-cflc4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0", GenerateName:"calico-kube-controllers-6fdc77bbd4-", Namespace:"calico-system", SelfLink:"", UID:"6970f73b-f9db-4e4e-ace1-ad25d9704f47", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fdc77bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb", Pod:"calico-kube-controllers-6fdc77bbd4-cflc4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali322a4b2d395", MAC:"66:53:01:72:a1:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:02.950380 containerd[1465]: 2025-11-01 00:18:02.945 [INFO][4068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb" Namespace="calico-system" Pod="calico-kube-controllers-6fdc77bbd4-cflc4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:02.969108 containerd[1465]: time="2025-11-01T00:18:02.969046663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84989fcb96-gtbgf,Uid:d3f82561-0214-49cb-b635-63c7018b0ce5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204\"" Nov 1 00:18:02.972634 containerd[1465]: time="2025-11-01T00:18:02.972542282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:02.973708 containerd[1465]: time="2025-11-01T00:18:02.973610081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:02.973835 containerd[1465]: time="2025-11-01T00:18:02.973707983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:02.973835 containerd[1465]: time="2025-11-01T00:18:02.973750051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:02.974660 containerd[1465]: time="2025-11-01T00:18:02.973963719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:03.004097 systemd[1]: Started cri-containerd-bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb.scope - libcontainer container bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb. Nov 1 00:18:03.032906 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:18:03.080801 containerd[1465]: time="2025-11-01T00:18:03.080721925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdc77bbd4-cflc4,Uid:6970f73b-f9db-4e4e-ace1-ad25d9704f47,Namespace:calico-system,Attempt:1,} returns sandbox id \"bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb\"" Nov 1 00:18:03.084215 systemd-networkd[1404]: calid497ffdf723: Link UP Nov 1 00:18:03.086072 systemd-networkd[1404]: calid497ffdf723: Gained carrier Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:02.736 [INFO][4061] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:02.750 [INFO][4061] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--ltj5c-eth0 goldmane-666569f655- calico-system ab5a5667-f558-4d28-9b68-0d3dbc43d636 1023 0 2025-11-01 00:17:32 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-ltj5c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid497ffdf723 [] [] <nil>}} ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Namespace="calico-system" Pod="goldmane-666569f655-ltj5c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ltj5c-" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:02.750 [INFO][4061] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Namespace="calico-system" Pod="goldmane-666569f655-ltj5c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:02.822 [INFO][4120] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" HandleID="k8s-pod-network.5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:02.822 [INFO][4120] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" HandleID="k8s-pod-network.5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000283d00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-ltj5c", "timestamp":"2025-11-01 00:18:02.822045989 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:02.822 [INFO][4120]
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:02.919 [INFO][4120] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:02.919 [INFO][4120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:02.985 [INFO][4120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" host="localhost" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.034 [INFO][4120] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.041 [INFO][4120] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.045 [INFO][4120] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.050 [INFO][4120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.050 [INFO][4120] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" host="localhost" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.053 [INFO][4120] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.060 [INFO][4120] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" host="localhost" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.074 [INFO][4120] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" host="localhost" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.074 [INFO][4120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" host="localhost" Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.074 [INFO][4120] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:03.103184 containerd[1465]: 2025-11-01 00:18:03.074 [INFO][4120] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" HandleID="k8s-pod-network.5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:03.103838 containerd[1465]: 2025-11-01 00:18:03.079 [INFO][4061] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Namespace="calico-system" Pod="goldmane-666569f655-ltj5c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ltj5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ltj5c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab5a5667-f558-4d28-9b68-0d3dbc43d636", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-ltj5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid497ffdf723", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:03.103838 containerd[1465]: 2025-11-01 00:18:03.080 [INFO][4061] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Namespace="calico-system" Pod="goldmane-666569f655-ltj5c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:03.103838 containerd[1465]: 2025-11-01 00:18:03.080 [INFO][4061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid497ffdf723 ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Namespace="calico-system" Pod="goldmane-666569f655-ltj5c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:03.103838 containerd[1465]: 2025-11-01 00:18:03.086 [INFO][4061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Namespace="calico-system" Pod="goldmane-666569f655-ltj5c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:03.103838 containerd[1465]: 2025-11-01 00:18:03.087 [INFO][4061] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Namespace="calico-system" Pod="goldmane-666569f655-ltj5c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ltj5c-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ltj5c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab5a5667-f558-4d28-9b68-0d3dbc43d636", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a", Pod:"goldmane-666569f655-ltj5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid497ffdf723", MAC:"d2:32:72:c0:62:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:03.103838 containerd[1465]: 2025-11-01 00:18:03.099 [INFO][4061] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a" Namespace="calico-system" Pod="goldmane-666569f655-ltj5c" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:03.159940 containerd[1465]: time="2025-11-01T00:18:03.159569844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:03.159940 containerd[1465]: time="2025-11-01T00:18:03.159765619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:03.159940 containerd[1465]: time="2025-11-01T00:18:03.159784615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:03.160115 containerd[1465]: time="2025-11-01T00:18:03.159942629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:03.184140 systemd[1]: Started cri-containerd-5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a.scope - libcontainer container 5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a. Nov 1 00:18:03.201467 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:18:03.228473 containerd[1465]: time="2025-11-01T00:18:03.228421573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ltj5c,Uid:ab5a5667-f558-4d28-9b68-0d3dbc43d636,Namespace:calico-system,Attempt:1,} returns sandbox id \"5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a\"" Nov 1 00:18:03.296556 systemd[1]: Removed slice kubepods-besteffort-pod80acf411_c481_42bf_9e90_d393893e1d60.slice - libcontainer container kubepods-besteffort-pod80acf411_c481_42bf_9e90_d393893e1d60.slice. 
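With the endpoint written to the datastore (MAC d2:32:72:c0:62:a5) and RunPodSandbox returning the sandbox ID, the goldmane sandbox is live under containerd. One way to confirm that from Go is via containerd's client; the socket path and the k8s.io namespace are the usual CRI defaults and are assumptions here.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumption: default containerd socket on this host.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	const sandboxID = "5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a"
	c, err := client.LoadContainer(ctx, sandboxID)
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	status, err := task.Status(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox task status:", status.Status) // expect: running
}

crictl inspectp with the same ID surfaces the equivalent state through the CRI.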
Nov 1 00:18:03.354116 containerd[1465]: time="2025-11-01T00:18:03.354052901Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:03.375463 containerd[1465]: time="2025-11-01T00:18:03.355513534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:03.375598 containerd[1465]: time="2025-11-01T00:18:03.355618680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:03.375884 kubelet[2591]: E1101 00:18:03.375799 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:03.375962 kubelet[2591]: E1101 00:18:03.375915 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:03.376482 containerd[1465]: time="2025-11-01T00:18:03.376425728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:18:03.376532 kubelet[2591]: E1101 00:18:03.376264 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96ns7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84989fcb96-gtbgf_calico-apiserver(d3f82561-0214-49cb-b635-63c7018b0ce5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:03.378003 kubelet[2591]: E1101 00:18:03.377922 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" podUID="d3f82561-0214-49cb-b635-63c7018b0ce5" Nov 1 00:18:03.603301 kubelet[2591]: E1101 00:18:03.603230 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" podUID="d3f82561-0214-49cb-b635-63c7018b0ce5" Nov 1 00:18:03.621733 kubelet[2591]: E1101 00:18:03.619666 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:03.645976 systemd[1]: var-lib-kubelet-pods-80acf411\x2dc481\x2d42bf\x2d9e90\x2dd393893e1d60-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkh9tm.mount: Deactivated successfully. Nov 1 00:18:03.646592 systemd[1]: var-lib-kubelet-pods-80acf411\x2dc481\x2d42bf\x2d9e90\x2dd393893e1d60-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
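The ErrImagePull above is a plain registry miss: ghcr.io answers 404 for the v3.30.4 tag, containerd maps that to NotFound, and kubelet records the failed container start. The pull can be reproduced outside kubelet with containerd's Go client (same socket and namespace assumptions as in the previous sketch):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	const ref = "ghcr.io/flatcar/calico/apiserver:v3.30.4"

	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// Expected here: failed to resolve reference ... not found
		log.Fatalf("pull %s: %v", ref, err)
	}
	fmt.Println("pulled", img.Name())
}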
Nov 1 00:18:03.693200 containerd[1465]: time="2025-11-01T00:18:03.693141280Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:03.850153 containerd[1465]: time="2025-11-01T00:18:03.850039692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:18:03.850153 containerd[1465]: time="2025-11-01T00:18:03.850111014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:03.850399 kubelet[2591]: E1101 00:18:03.850348 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:03.850506 kubelet[2591]: E1101 00:18:03.850409 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:03.850748 kubelet[2591]: E1101 00:18:03.850676 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6fdc77bbd4-cflc4_calico-system(6970f73b-f9db-4e4e-ace1-ad25d9704f47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:03.850927 containerd[1465]: time="2025-11-01T00:18:03.850749323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:18:03.852217 kubelet[2591]: E1101 00:18:03.852160 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" podUID="6970f73b-f9db-4e4e-ace1-ad25d9704f47" Nov 1 00:18:04.201490 containerd[1465]: time="2025-11-01T00:18:04.201428982Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:04.240755 containerd[1465]: time="2025-11-01T00:18:04.240669490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:18:04.240977 containerd[1465]: time="2025-11-01T00:18:04.240728359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:04.241474 kubelet[2591]: E1101 00:18:04.241122 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:04.241474 kubelet[2591]: E1101 00:18:04.241193 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:04.241474 kubelet[2591]: E1101 
00:18:04.241411 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4ns7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ltj5c_calico-system(ab5a5667-f558-4d28-9b68-0d3dbc43d636): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:04.243182 kubelet[2591]: E1101 00:18:04.243132 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ltj5c" 
podUID="ab5a5667-f558-4d28-9b68-0d3dbc43d636" Nov 1 00:18:04.275051 kernel: bpftool[4438]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:18:04.282483 containerd[1465]: time="2025-11-01T00:18:04.282431619Z" level=info msg="StopPodSandbox for \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\"" Nov 1 00:18:04.282483 containerd[1465]: time="2025-11-01T00:18:04.282484749Z" level=info msg="StopPodSandbox for \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\"" Nov 1 00:18:04.479942 kubelet[2591]: I1101 00:18:04.479593 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwmm2\" (UniqueName: \"kubernetes.io/projected/1d99ca26-0cda-4b97-b45f-ad18f38bfeae-kube-api-access-jwmm2\") pod \"whisker-cfbf4bb6d-hrb7l\" (UID: \"1d99ca26-0cda-4b97-b45f-ad18f38bfeae\") " pod="calico-system/whisker-cfbf4bb6d-hrb7l" Nov 1 00:18:04.479942 kubelet[2591]: I1101 00:18:04.479701 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d99ca26-0cda-4b97-b45f-ad18f38bfeae-whisker-backend-key-pair\") pod \"whisker-cfbf4bb6d-hrb7l\" (UID: \"1d99ca26-0cda-4b97-b45f-ad18f38bfeae\") " pod="calico-system/whisker-cfbf4bb6d-hrb7l" Nov 1 00:18:04.479942 kubelet[2591]: I1101 00:18:04.479740 2591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d99ca26-0cda-4b97-b45f-ad18f38bfeae-whisker-ca-bundle\") pod \"whisker-cfbf4bb6d-hrb7l\" (UID: \"1d99ca26-0cda-4b97-b45f-ad18f38bfeae\") " pod="calico-system/whisker-cfbf4bb6d-hrb7l" Nov 1 00:18:04.483925 systemd[1]: Created slice kubepods-besteffort-pod1d99ca26_0cda_4b97_b45f_ad18f38bfeae.slice - libcontainer container kubepods-besteffort-pod1d99ca26_0cda_4b97_b45f_ad18f38bfeae.slice. Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.424 [INFO][4455] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.425 [INFO][4455] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" iface="eth0" netns="/var/run/netns/cni-b03c4d72-f76e-ef55-3fc5-56bd4fc1ae7e" Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.426 [INFO][4455] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" iface="eth0" netns="/var/run/netns/cni-b03c4d72-f76e-ef55-3fc5-56bd4fc1ae7e" Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.426 [INFO][4455] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" iface="eth0" netns="/var/run/netns/cni-b03c4d72-f76e-ef55-3fc5-56bd4fc1ae7e" Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.427 [INFO][4455] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.427 [INFO][4455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.473 [INFO][4476] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" HandleID="k8s-pod-network.dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.473 [INFO][4476] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.474 [INFO][4476] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.489 [WARNING][4476] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" HandleID="k8s-pod-network.dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.490 [INFO][4476] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" HandleID="k8s-pod-network.dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.495 [INFO][4476] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:04.504194 containerd[1465]: 2025-11-01 00:18:04.501 [INFO][4455] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:04.506317 containerd[1465]: time="2025-11-01T00:18:04.504422443Z" level=info msg="TearDown network for sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\" successfully" Nov 1 00:18:04.506317 containerd[1465]: time="2025-11-01T00:18:04.504451566Z" level=info msg="StopPodSandbox for \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\" returns successfully" Nov 1 00:18:04.506317 containerd[1465]: time="2025-11-01T00:18:04.505229457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b6lhx,Uid:d94b4da9-d4a7-4f92-8ec6-90e45ff748b8,Namespace:kube-system,Attempt:1,}" Nov 1 00:18:04.506409 kubelet[2591]: E1101 00:18:04.504824 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:04.507847 systemd[1]: run-netns-cni\x2db03c4d72\x2df76e\x2def55\x2d3fc5\x2d56bd4fc1ae7e.mount: Deactivated successfully. 
Nov 1 00:18:04.617637 systemd-networkd[1404]: cali322a4b2d395: Gained IPv6LL Nov 1 00:18:04.619016 systemd-networkd[1404]: vxlan.calico: Link UP Nov 1 00:18:04.619021 systemd-networkd[1404]: vxlan.calico: Gained carrier Nov 1 00:18:04.627516 kubelet[2591]: E1101 00:18:04.627470 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ltj5c" podUID="ab5a5667-f558-4d28-9b68-0d3dbc43d636" Nov 1 00:18:04.628204 kubelet[2591]: E1101 00:18:04.627704 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" podUID="d3f82561-0214-49cb-b635-63c7018b0ce5" Nov 1 00:18:04.628665 kubelet[2591]: E1101 00:18:04.628628 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" podUID="6970f73b-f9db-4e4e-ace1-ad25d9704f47" Nov 1 00:18:04.745091 systemd-networkd[1404]: cali89476d61464: Gained IPv6LL Nov 1 00:18:04.809126 systemd-networkd[1404]: calid497ffdf723: Gained IPv6LL Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.426 [INFO][4465] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.426 [INFO][4465] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" iface="eth0" netns="/var/run/netns/cni-fac579ef-be29-7f6f-533a-c9aadb9ed347" Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.429 [INFO][4465] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" iface="eth0" netns="/var/run/netns/cni-fac579ef-be29-7f6f-533a-c9aadb9ed347" Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.431 [INFO][4465] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" iface="eth0" netns="/var/run/netns/cni-fac579ef-be29-7f6f-533a-c9aadb9ed347" Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.431 [INFO][4465] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.431 [INFO][4465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.470 [INFO][4478] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" HandleID="k8s-pod-network.f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.470 [INFO][4478] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.496 [INFO][4478] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.583 [WARNING][4478] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" HandleID="k8s-pod-network.f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.584 [INFO][4478] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" HandleID="k8s-pod-network.f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.867 [INFO][4478] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:04.875842 containerd[1465]: 2025-11-01 00:18:04.872 [INFO][4465] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:04.876427 containerd[1465]: time="2025-11-01T00:18:04.876081911Z" level=info msg="TearDown network for sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\" successfully" Nov 1 00:18:04.876427 containerd[1465]: time="2025-11-01T00:18:04.876118058Z" level=info msg="StopPodSandbox for \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\" returns successfully" Nov 1 00:18:04.880539 kubelet[2591]: E1101 00:18:04.879201 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:04.880689 containerd[1465]: time="2025-11-01T00:18:04.880157813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqnhp,Uid:6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4,Namespace:kube-system,Attempt:1,}" Nov 1 00:18:04.879714 systemd[1]: run-netns-cni\x2dfac579ef\x2dbe29\x2d7f6f\x2d533a\x2dc9aadb9ed347.mount: Deactivated successfully. Nov 1 00:18:05.189457 systemd[1]: Started sshd@11-10.0.0.38:22-10.0.0.1:45652.service - OpenSSH per-connection server daemon (10.0.0.1:45652). 
Nov 1 00:18:05.265162 sshd[4572]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:05.268351 sshd[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:05.279761 systemd-logind[1452]: New session 12 of user core. Nov 1 00:18:05.284517 kubelet[2591]: I1101 00:18:05.284469 2591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80acf411-c481-42bf-9e90-d393893e1d60" path="/var/lib/kubelet/pods/80acf411-c481-42bf-9e90-d393893e1d60/volumes" Nov 1 00:18:05.285377 containerd[1465]: time="2025-11-01T00:18:05.285309872Z" level=info msg="StopPodSandbox for \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\"" Nov 1 00:18:05.287051 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:18:05.388643 containerd[1465]: time="2025-11-01T00:18:05.388359888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cfbf4bb6d-hrb7l,Uid:1d99ca26-0cda-4b97-b45f-ad18f38bfeae,Namespace:calico-system,Attempt:0,}" Nov 1 00:18:05.597257 sshd[4572]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:05.604385 systemd[1]: sshd@11-10.0.0.38:22-10.0.0.1:45652.service: Deactivated successfully. Nov 1 00:18:05.606659 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:18:05.611446 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:18:05.612569 systemd-logind[1452]: Removed session 12. Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.021 [INFO][4587] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.021 [INFO][4587] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" iface="eth0" netns="/var/run/netns/cni-6efaa559-7607-2532-c4bf-057fb5c3afcd" Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.021 [INFO][4587] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" iface="eth0" netns="/var/run/netns/cni-6efaa559-7607-2532-c4bf-057fb5c3afcd" Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.021 [INFO][4587] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" iface="eth0" netns="/var/run/netns/cni-6efaa559-7607-2532-c4bf-057fb5c3afcd" Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.021 [INFO][4587] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.021 [INFO][4587] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.043 [INFO][4624] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" HandleID="k8s-pod-network.242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.044 [INFO][4624] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.044 [INFO][4624] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.195 [WARNING][4624] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" HandleID="k8s-pod-network.242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.195 [INFO][4624] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" HandleID="k8s-pod-network.242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.198 [INFO][4624] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:06.206257 containerd[1465]: 2025-11-01 00:18:06.202 [INFO][4587] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:06.214152 containerd[1465]: time="2025-11-01T00:18:06.212329824Z" level=info msg="TearDown network for sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\" successfully" Nov 1 00:18:06.214152 containerd[1465]: time="2025-11-01T00:18:06.212378004Z" level=info msg="StopPodSandbox for \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\" returns successfully" Nov 1 00:18:06.213410 systemd[1]: run-netns-cni\x2d6efaa559\x2d7607\x2d2532\x2dc4bf\x2d057fb5c3afcd.mount: Deactivated successfully. 
Nov 1 00:18:06.217511 containerd[1465]: time="2025-11-01T00:18:06.217464885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84989fcb96-gd9wk,Uid:d3a2d948-d842-45c9-8a49-ba664ed2926c,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:18:06.284969 systemd-networkd[1404]: vxlan.calico: Gained IPv6LL Nov 1 00:18:06.324536 systemd-networkd[1404]: calibead52a0638: Link UP Nov 1 00:18:06.325277 systemd-networkd[1404]: calibead52a0638: Gained carrier Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.023 [INFO][4610] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0 coredns-674b8bbfcf- kube-system d94b4da9-d4a7-4f92-8ec6-90e45ff748b8 1080 0 2025-11-01 00:17:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-b6lhx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibead52a0638 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6lhx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b6lhx-" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.023 [INFO][4610] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6lhx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.232 [INFO][4633] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" HandleID="k8s-pod-network.565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.233 [INFO][4633] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" HandleID="k8s-pod-network.565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c78f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-b6lhx", "timestamp":"2025-11-01 00:18:06.232702285 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.233 [INFO][4633] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.233 [INFO][4633] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.233 [INFO][4633] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.242 [INFO][4633] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" host="localhost" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.250 [INFO][4633] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.258 [INFO][4633] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.261 [INFO][4633] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.268 [INFO][4633] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.268 [INFO][4633] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" host="localhost" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.270 [INFO][4633] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390 Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.283 [INFO][4633] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" host="localhost" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.302 [INFO][4633] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" host="localhost" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.303 [INFO][4633] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" host="localhost" Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.303 [INFO][4633] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:06.368055 containerd[1465]: 2025-11-01 00:18:06.303 [INFO][4633] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" HandleID="k8s-pod-network.565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:06.369760 containerd[1465]: 2025-11-01 00:18:06.317 [INFO][4610] cni-plugin/k8s.go 418: Populated endpoint ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6lhx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d94b4da9-d4a7-4f92-8ec6-90e45ff748b8", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-b6lhx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibead52a0638", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:06.369760 containerd[1465]: 2025-11-01 00:18:06.317 [INFO][4610] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6lhx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:06.369760 containerd[1465]: 2025-11-01 00:18:06.317 [INFO][4610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibead52a0638 ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6lhx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:06.369760 containerd[1465]: 2025-11-01 00:18:06.326 [INFO][4610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6lhx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:06.369760 
containerd[1465]: 2025-11-01 00:18:06.332 [INFO][4610] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6lhx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d94b4da9-d4a7-4f92-8ec6-90e45ff748b8", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390", Pod:"coredns-674b8bbfcf-b6lhx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibead52a0638", MAC:"8a:b3:c8:9f:fb:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:06.369760 containerd[1465]: 2025-11-01 00:18:06.349 [INFO][4610] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6lhx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:06.426615 containerd[1465]: time="2025-11-01T00:18:06.426008111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:06.426615 containerd[1465]: time="2025-11-01T00:18:06.426166778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:06.426615 containerd[1465]: time="2025-11-01T00:18:06.426184180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:06.426615 containerd[1465]: time="2025-11-01T00:18:06.426346444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:06.441607 systemd-networkd[1404]: cali4076f59a887: Link UP Nov 1 00:18:06.442988 systemd-networkd[1404]: cali4076f59a887: Gained carrier Nov 1 00:18:06.459573 systemd[1]: Started cri-containerd-565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390.scope - libcontainer container 565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390. Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.306 [INFO][4648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0 whisker-cfbf4bb6d- calico-system 1d99ca26-0cda-4b97-b45f-ad18f38bfeae 1085 0 2025-11-01 00:18:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cfbf4bb6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-cfbf4bb6d-hrb7l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4076f59a887 [] [] }} ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Namespace="calico-system" Pod="whisker-cfbf4bb6d-hrb7l" WorkloadEndpoint="localhost-k8s-whisker--cfbf4bb6d--hrb7l-" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.306 [INFO][4648] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Namespace="calico-system" Pod="whisker-cfbf4bb6d-hrb7l" WorkloadEndpoint="localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.354 [INFO][4681] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" HandleID="k8s-pod-network.e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Workload="localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.354 [INFO][4681] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" HandleID="k8s-pod-network.e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Workload="localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0d20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-cfbf4bb6d-hrb7l", "timestamp":"2025-11-01 00:18:06.353987214 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.354 [INFO][4681] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.354 [INFO][4681] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.354 [INFO][4681] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.362 [INFO][4681] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" host="localhost" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.374 [INFO][4681] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.383 [INFO][4681] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.387 [INFO][4681] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.395 [INFO][4681] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.397 [INFO][4681] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" host="localhost" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.403 [INFO][4681] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1 Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.413 [INFO][4681] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" host="localhost" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.426 [INFO][4681] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" host="localhost" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.426 [INFO][4681] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" host="localhost" Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.426 [INFO][4681] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:06.462250 containerd[1465]: 2025-11-01 00:18:06.426 [INFO][4681] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" HandleID="k8s-pod-network.e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Workload="localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0" Nov 1 00:18:06.463479 containerd[1465]: 2025-11-01 00:18:06.430 [INFO][4648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Namespace="calico-system" Pod="whisker-cfbf4bb6d-hrb7l" WorkloadEndpoint="localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0", GenerateName:"whisker-cfbf4bb6d-", Namespace:"calico-system", SelfLink:"", UID:"1d99ca26-0cda-4b97-b45f-ad18f38bfeae", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cfbf4bb6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-cfbf4bb6d-hrb7l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4076f59a887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:06.463479 containerd[1465]: 2025-11-01 00:18:06.430 [INFO][4648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Namespace="calico-system" Pod="whisker-cfbf4bb6d-hrb7l" WorkloadEndpoint="localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0" Nov 1 00:18:06.463479 containerd[1465]: 2025-11-01 00:18:06.430 [INFO][4648] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4076f59a887 ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Namespace="calico-system" Pod="whisker-cfbf4bb6d-hrb7l" WorkloadEndpoint="localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0" Nov 1 00:18:06.463479 containerd[1465]: 2025-11-01 00:18:06.443 [INFO][4648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Namespace="calico-system" Pod="whisker-cfbf4bb6d-hrb7l" WorkloadEndpoint="localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0" Nov 1 00:18:06.463479 containerd[1465]: 2025-11-01 00:18:06.444 [INFO][4648] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Namespace="calico-system" Pod="whisker-cfbf4bb6d-hrb7l" WorkloadEndpoint="localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0", GenerateName:"whisker-cfbf4bb6d-", Namespace:"calico-system", SelfLink:"", UID:"1d99ca26-0cda-4b97-b45f-ad18f38bfeae", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cfbf4bb6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1", Pod:"whisker-cfbf4bb6d-hrb7l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4076f59a887", MAC:"4a:39:28:e8:1f:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:06.463479 containerd[1465]: 2025-11-01 00:18:06.457 [INFO][4648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1" Namespace="calico-system" Pod="whisker-cfbf4bb6d-hrb7l" WorkloadEndpoint="localhost-k8s-whisker--cfbf4bb6d--hrb7l-eth0" Nov 1 00:18:06.491820 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:18:06.509830 containerd[1465]: time="2025-11-01T00:18:06.509394697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:06.509830 containerd[1465]: time="2025-11-01T00:18:06.509487871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:06.509830 containerd[1465]: time="2025-11-01T00:18:06.509501927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:06.509830 containerd[1465]: time="2025-11-01T00:18:06.509597706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:06.535697 systemd[1]: Started cri-containerd-e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1.scope - libcontainer container e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1. 
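The "Setting the host side veth name to cali4076f59a887" entry above reflects Calico's interface-naming scheme: the host end of each workload's veth pair gets a fixed "cali" prefix plus a short digest of the workload identity, keeping the name stable across retries and under Linux's 15-character IFNAMSIZ limit. A sketch of that shape; the exact hash and inputs Calico uses are an assumption here, so this will not reproduce the suffix seen in the log:

```go
// Hypothetical reconstruction of Calico-style host-side veth naming:
// "cali" plus the first 11 hex characters of a hash of the workload
// identity, for a total of 15 characters (the IFNAMSIZ ceiling).
package main

import (
	"crypto/sha1"
	"fmt"
)

func vethNameForWorkload(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + fmt.Sprintf("%x", sum)[:11]
}

func main() {
	// The whisker pod from the log; real Calico may hash different
	// inputs, so the suffix here is illustrative only.
	fmt.Println(vethNameForWorkload("calico-system", "whisker-cfbf4bb6d-hrb7l"))
}
```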
Nov 1 00:18:06.541793 containerd[1465]: time="2025-11-01T00:18:06.541711236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b6lhx,Uid:d94b4da9-d4a7-4f92-8ec6-90e45ff748b8,Namespace:kube-system,Attempt:1,} returns sandbox id \"565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390\"" Nov 1 00:18:06.543275 kubelet[2591]: E1101 00:18:06.543228 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:06.553568 systemd-networkd[1404]: cali660b501ece0: Link UP Nov 1 00:18:06.562828 containerd[1465]: time="2025-11-01T00:18:06.558580544Z" level=info msg="CreateContainer within sandbox \"565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:18:06.559376 systemd-networkd[1404]: cali660b501ece0: Gained carrier Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.307 [INFO][4641] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0 coredns-674b8bbfcf- kube-system 6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4 1081 0 2025-11-01 00:17:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-jqnhp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali660b501ece0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqnhp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqnhp-" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.308 [INFO][4641] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqnhp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.390 [INFO][4683] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" HandleID="k8s-pod-network.515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.400 [INFO][4683] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" HandleID="k8s-pod-network.515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-jqnhp", "timestamp":"2025-11-01 00:18:06.390219868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.401 [INFO][4683] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.426 [INFO][4683] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.426 [INFO][4683] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.463 [INFO][4683] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" host="localhost" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.471 [INFO][4683] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.485 [INFO][4683] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.490 [INFO][4683] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.494 [INFO][4683] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.494 [INFO][4683] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" host="localhost" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.502 [INFO][4683] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604 Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.509 [INFO][4683] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" host="localhost" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.529 [INFO][4683] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" host="localhost" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.529 [INFO][4683] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" host="localhost" Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.529 [INFO][4683] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
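One formatting quirk worth decoding in the WorkloadEndpoint dumps below: the CoreDNS ports appear as Port:0x35 and Port:0x23c1 because the endpoint struct is printed with Go's %#v verb, which renders unsigned integer fields in hexadecimal. 0x35 is 53 (DNS over UDP and TCP) and 0x23c1 is 9153 (CoreDNS's Prometheus metrics port), matching the decimal "{dns UDP 53 0}" and "{metrics TCP 9153 0}" forms earlier in the trace. A two-line check:

```go
package main

import "fmt"

func main() {
	// %#v prints unsigned integers in hex, which is why the endpoint
	// dumps show Port:0x35 and Port:0x23c1 for ports 53 and 9153.
	fmt.Printf("%#v %#v\n", uint16(53), uint16(9153)) // 0x35 0x23c1
}
```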
Nov 1 00:18:06.603778 containerd[1465]: 2025-11-01 00:18:06.529 [INFO][4683] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" HandleID="k8s-pod-network.515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:06.604660 containerd[1465]: 2025-11-01 00:18:06.539 [INFO][4641] cni-plugin/k8s.go 418: Populated endpoint ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqnhp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-jqnhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali660b501ece0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:06.604660 containerd[1465]: 2025-11-01 00:18:06.540 [INFO][4641] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqnhp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:06.604660 containerd[1465]: 2025-11-01 00:18:06.540 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali660b501ece0 ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqnhp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:06.604660 containerd[1465]: 2025-11-01 00:18:06.558 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqnhp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:06.604660 
containerd[1465]: 2025-11-01 00:18:06.560 [INFO][4641] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqnhp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604", Pod:"coredns-674b8bbfcf-jqnhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali660b501ece0", MAC:"3e:fe:83:dd:b5:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:06.604660 containerd[1465]: 2025-11-01 00:18:06.582 [INFO][4641] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqnhp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:06.642547 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:18:06.650102 containerd[1465]: time="2025-11-01T00:18:06.645082657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:06.650102 containerd[1465]: time="2025-11-01T00:18:06.645182173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:06.650102 containerd[1465]: time="2025-11-01T00:18:06.645194266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:06.650102 containerd[1465]: time="2025-11-01T00:18:06.645313298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:06.690934 systemd-networkd[1404]: calie54eb177f1c: Link UP Nov 1 00:18:06.693543 systemd-networkd[1404]: calie54eb177f1c: Gained carrier Nov 1 00:18:06.695625 containerd[1465]: time="2025-11-01T00:18:06.695529300Z" level=info msg="CreateContainer within sandbox \"565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d092c7e64fb19a9248fdc56a2b374791a5fe50ad7908ebb1739daf0807de6527\"" Nov 1 00:18:06.699317 containerd[1465]: time="2025-11-01T00:18:06.699246351Z" level=info msg="StartContainer for \"d092c7e64fb19a9248fdc56a2b374791a5fe50ad7908ebb1739daf0807de6527\"" Nov 1 00:18:06.705623 systemd[1]: Started cri-containerd-515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604.scope - libcontainer container 515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604. Nov 1 00:18:06.713372 containerd[1465]: time="2025-11-01T00:18:06.713129011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cfbf4bb6d-hrb7l,Uid:1d99ca26-0cda-4b97-b45f-ad18f38bfeae,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5e4885b8c8266143bd190019e088a8a17de82e11accde53ebbca2a1e6ca89f1\"" Nov 1 00:18:06.717881 containerd[1465]: time="2025-11-01T00:18:06.717259786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.373 [INFO][4667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0 calico-apiserver-84989fcb96- calico-apiserver d3a2d948-d842-45c9-8a49-ba664ed2926c 1106 0 2025-11-01 00:17:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84989fcb96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84989fcb96-gd9wk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie54eb177f1c [] [] }} ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gd9wk" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.374 [INFO][4667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gd9wk" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.423 [INFO][4708] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" HandleID="k8s-pod-network.858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.424 [INFO][4708] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" HandleID="k8s-pod-network.858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b5270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84989fcb96-gd9wk", "timestamp":"2025-11-01 00:18:06.423733564 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.424 [INFO][4708] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.530 [INFO][4708] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.530 [INFO][4708] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.579 [INFO][4708] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" host="localhost" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.598 [INFO][4708] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.604 [INFO][4708] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.612 [INFO][4708] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.622 [INFO][4708] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.623 [INFO][4708] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" host="localhost" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.629 [INFO][4708] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.643 [INFO][4708] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" host="localhost" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.655 [INFO][4708] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" host="localhost" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.655 [INFO][4708] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" host="localhost" Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.655 [INFO][4708] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
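Note how strictly the host-wide IPAM lock serializes these three concurrent CNI adds. Handler [4683] asks for the lock at 00:18:06.401 but only acquires it at 06.426, the instant [4681] releases it; [4708] asks at 06.424 and acquires at 06.530, the instant [4683] releases. Each assignment holds the lock for roughly 100 to 125 ms here (06.426 to 06.529 for [4683], 06.530 to 06.655 for [4708]), so the third address, 192.168.88.135, is only claimed about 230 ms after its request was filed: the cost of fully serialized allocation on one node.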
Nov 1 00:18:06.719127 containerd[1465]: 2025-11-01 00:18:06.655 [INFO][4708] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" HandleID="k8s-pod-network.858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:06.720458 containerd[1465]: 2025-11-01 00:18:06.668 [INFO][4667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gd9wk" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0", GenerateName:"calico-apiserver-84989fcb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3a2d948-d842-45c9-8a49-ba664ed2926c", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84989fcb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84989fcb96-gd9wk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie54eb177f1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:06.720458 containerd[1465]: 2025-11-01 00:18:06.669 [INFO][4667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gd9wk" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:06.720458 containerd[1465]: 2025-11-01 00:18:06.669 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie54eb177f1c ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gd9wk" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:06.720458 containerd[1465]: 2025-11-01 00:18:06.692 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gd9wk" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:06.720458 containerd[1465]: 2025-11-01 00:18:06.693 [INFO][4667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gd9wk" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0", GenerateName:"calico-apiserver-84989fcb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3a2d948-d842-45c9-8a49-ba664ed2926c", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84989fcb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e", Pod:"calico-apiserver-84989fcb96-gd9wk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie54eb177f1c", MAC:"1e:22:0b:8a:2c:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:06.720458 containerd[1465]: 2025-11-01 00:18:06.707 [INFO][4667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e" Namespace="calico-apiserver" Pod="calico-apiserver-84989fcb96-gd9wk" WorkloadEndpoint="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:06.743306 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:18:06.752154 systemd[1]: Started cri-containerd-d092c7e64fb19a9248fdc56a2b374791a5fe50ad7908ebb1739daf0807de6527.scope - libcontainer container d092c7e64fb19a9248fdc56a2b374791a5fe50ad7908ebb1739daf0807de6527. Nov 1 00:18:06.758812 containerd[1465]: time="2025-11-01T00:18:06.758412419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:06.758812 containerd[1465]: time="2025-11-01T00:18:06.758521492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:06.758812 containerd[1465]: time="2025-11-01T00:18:06.758554975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:06.758812 containerd[1465]: time="2025-11-01T00:18:06.758692642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:06.789389 systemd[1]: Started cri-containerd-858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e.scope - libcontainer container 858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e. Nov 1 00:18:06.816298 containerd[1465]: time="2025-11-01T00:18:06.816228605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqnhp,Uid:6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4,Namespace:kube-system,Attempt:1,} returns sandbox id \"515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604\"" Nov 1 00:18:06.818438 kubelet[2591]: E1101 00:18:06.818406 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:06.837508 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:18:06.946794 containerd[1465]: time="2025-11-01T00:18:06.946738805Z" level=info msg="CreateContainer within sandbox \"515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:18:06.981379 containerd[1465]: time="2025-11-01T00:18:06.981183639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84989fcb96-gd9wk,Uid:d3a2d948-d842-45c9-8a49-ba664ed2926c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e\"" Nov 1 00:18:06.981379 containerd[1465]: time="2025-11-01T00:18:06.981184852Z" level=info msg="StartContainer for \"d092c7e64fb19a9248fdc56a2b374791a5fe50ad7908ebb1739daf0807de6527\" returns successfully" Nov 1 00:18:07.014141 containerd[1465]: time="2025-11-01T00:18:07.014062055Z" level=info msg="CreateContainer within sandbox \"515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3872f9a7fa3c627bb53d97a0c8f241a82d766cd6444d020d664a06417cf1fe8\"" Nov 1 00:18:07.016123 containerd[1465]: time="2025-11-01T00:18:07.015075959Z" level=info msg="StartContainer for \"a3872f9a7fa3c627bb53d97a0c8f241a82d766cd6444d020d664a06417cf1fe8\"" Nov 1 00:18:07.043069 containerd[1465]: time="2025-11-01T00:18:07.043021701Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:07.047062 systemd[1]: Started cri-containerd-a3872f9a7fa3c627bb53d97a0c8f241a82d766cd6444d020d664a06417cf1fe8.scope - libcontainer container a3872f9a7fa3c627bb53d97a0c8f241a82d766cd6444d020d664a06417cf1fe8. 
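The four "loading plugin io.containerd.*" lines that precede each "Started cri-containerd-...scope" unit appear to come from the per-sandbox shim: for every sandbox or container start, containerd launches a fresh io.containerd.runc.v2 shim process, and each shim loads its event publisher, shutdown, task, and pause plugins before the systemd scope for the container shows up. That is why the same four lines repeat verbatim for each of the sandboxes started in this window.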
Nov 1 00:18:07.054032 containerd[1465]: time="2025-11-01T00:18:07.053961427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:18:07.054154 containerd[1465]: time="2025-11-01T00:18:07.054086520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:18:07.054336 kubelet[2591]: E1101 00:18:07.054286 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:07.054401 kubelet[2591]: E1101 00:18:07.054340 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:07.054581 kubelet[2591]: E1101 00:18:07.054542 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d083028835b047a397c3176e571d04eb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwmm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cfbf4bb6d-hrb7l_calico-system(1d99ca26-0cda-4b97-b45f-ad18f38bfeae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:07.055480 containerd[1465]: time="2025-11-01T00:18:07.055451572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:07.105915 containerd[1465]: time="2025-11-01T00:18:07.105832866Z" 
level=info msg="StartContainer for \"a3872f9a7fa3c627bb53d97a0c8f241a82d766cd6444d020d664a06417cf1fe8\" returns successfully" Nov 1 00:18:07.353462 containerd[1465]: time="2025-11-01T00:18:07.353290738Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:07.428583 containerd[1465]: time="2025-11-01T00:18:07.428474357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:07.428583 containerd[1465]: time="2025-11-01T00:18:07.428474727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:07.428869 kubelet[2591]: E1101 00:18:07.428820 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:07.428947 kubelet[2591]: E1101 00:18:07.428897 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:07.429194 kubelet[2591]: E1101 00:18:07.429132 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2zg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84989fcb96-gd9wk_calico-apiserver(d3a2d948-d842-45c9-8a49-ba664ed2926c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:07.429577 containerd[1465]: time="2025-11-01T00:18:07.429523998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:18:07.430627 kubelet[2591]: E1101 00:18:07.430589 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" podUID="d3a2d948-d842-45c9-8a49-ba664ed2926c" Nov 1 00:18:07.713163 kubelet[2591]: E1101 00:18:07.712982 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:07.722030 kubelet[2591]: E1101 00:18:07.721068 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:07.729191 kubelet[2591]: E1101 00:18:07.728277 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" podUID="d3a2d948-d842-45c9-8a49-ba664ed2926c" Nov 1 00:18:07.731851 kubelet[2591]: I1101 00:18:07.731785 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b6lhx" podStartSLOduration=48.731762617 podStartE2EDuration="48.731762617s" podCreationTimestamp="2025-11-01 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:18:07.731153549 +0000 UTC m=+52.650150273" 
watchObservedRunningTime="2025-11-01 00:18:07.731762617 +0000 UTC m=+52.650759341" Nov 1 00:18:07.749880 kubelet[2591]: I1101 00:18:07.749614 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jqnhp" podStartSLOduration=48.749586741 podStartE2EDuration="48.749586741s" podCreationTimestamp="2025-11-01 00:17:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:18:07.748585048 +0000 UTC m=+52.667581772" watchObservedRunningTime="2025-11-01 00:18:07.749586741 +0000 UTC m=+52.668583475" Nov 1 00:18:07.753122 systemd-networkd[1404]: calibead52a0638: Gained IPv6LL Nov 1 00:18:07.754128 systemd-networkd[1404]: cali660b501ece0: Gained IPv6LL Nov 1 00:18:07.781088 containerd[1465]: time="2025-11-01T00:18:07.781021292Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:07.803712 containerd[1465]: time="2025-11-01T00:18:07.803561791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:18:07.803712 containerd[1465]: time="2025-11-01T00:18:07.803643002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:07.803931 kubelet[2591]: E1101 00:18:07.803824 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:07.803931 kubelet[2591]: E1101 00:18:07.803896 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:07.804103 kubelet[2591]: E1101 00:18:07.804037 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwmm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cfbf4bb6d-hrb7l_calico-system(1d99ca26-0cda-4b97-b45f-ad18f38bfeae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:07.805526 kubelet[2591]: E1101 00:18:07.805468 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cfbf4bb6d-hrb7l" podUID="1d99ca26-0cda-4b97-b45f-ad18f38bfeae" Nov 1 00:18:08.137111 systemd-networkd[1404]: calie54eb177f1c: Gained IPv6LL Nov 1 00:18:08.458217 systemd-networkd[1404]: cali4076f59a887: Gained IPv6LL Nov 1 00:18:08.732101 kubelet[2591]: E1101 00:18:08.731626 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:08.732101 kubelet[2591]: E1101 00:18:08.731746 2591 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:08.734338 kubelet[2591]: E1101 00:18:08.733747 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" podUID="d3a2d948-d842-45c9-8a49-ba664ed2926c" Nov 1 00:18:08.734766 kubelet[2591]: E1101 00:18:08.734731 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cfbf4bb6d-hrb7l" podUID="1d99ca26-0cda-4b97-b45f-ad18f38bfeae" Nov 1 00:18:09.735216 kubelet[2591]: E1101 00:18:09.735154 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:09.735937 kubelet[2591]: E1101 00:18:09.735561 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:10.616083 systemd[1]: Started sshd@12-10.0.0.38:22-10.0.0.1:45714.service - OpenSSH per-connection server daemon (10.0.0.1:45714). Nov 1 00:18:10.696692 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 45714 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:10.699178 sshd[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:10.705360 systemd-logind[1452]: New session 13 of user core. Nov 1 00:18:10.714223 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:18:10.879158 sshd[5006]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:10.884545 systemd[1]: sshd@12-10.0.0.38:22-10.0.0.1:45714.service: Deactivated successfully. Nov 1 00:18:10.887788 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:18:10.888577 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:18:10.889816 systemd-logind[1452]: Removed session 13. 
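The whisker, apiserver, and whisker-backend pulls above all fail the same way: ghcr.io answers 404 for the tag ("trying next host - response was http.StatusNotFound"), containerd surfaces a gRPC NotFound, and kubelet escalates from ErrImagePull to ImagePullBackOff, retrying with increasing delay rather than failing the pod permanently. A minimal sketch of reproducing the resolution step with containerd's Go client; the socket path and the "k8s.io" namespace are the usual defaults, not values taken from this log:

```go
// Pull an image through containerd and distinguish a missing tag
// (NotFound, as in the log) from other failures. A sketch under the
// assumption of the default socket and kubelet's "k8s.io" namespace.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"

	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		if errdefs.IsNotFound(err) {
			// The registry has no such tag; kubelet would report
			// ErrImagePull here and then back off, as in the log.
			fmt.Println("image not found:", ref)
			return
		}
		log.Fatal(err)
	}
	fmt.Println("pulled", ref)
}
```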
Nov 1 00:18:15.273040 containerd[1465]: time="2025-11-01T00:18:15.272981286Z" level=info msg="StopPodSandbox for \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\"" Nov 1 00:18:15.291174 containerd[1465]: time="2025-11-01T00:18:15.291124787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.336 [WARNING][5039] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ltj5c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab5a5667-f558-4d28-9b68-0d3dbc43d636", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a", Pod:"goldmane-666569f655-ltj5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid497ffdf723", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.337 [INFO][5039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.337 [INFO][5039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" iface="eth0" netns="" Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.337 [INFO][5039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.337 [INFO][5039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.369 [INFO][5049] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" HandleID="k8s-pod-network.b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.370 [INFO][5049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.370 [INFO][5049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.376 [WARNING][5049] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" HandleID="k8s-pod-network.b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.376 [INFO][5049] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" HandleID="k8s-pod-network.b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.378 [INFO][5049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:15.386398 containerd[1465]: 2025-11-01 00:18:15.381 [INFO][5039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:15.387453 containerd[1465]: time="2025-11-01T00:18:15.386498231Z" level=info msg="TearDown network for sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\" successfully" Nov 1 00:18:15.387453 containerd[1465]: time="2025-11-01T00:18:15.386536623Z" level=info msg="StopPodSandbox for \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\" returns successfully" Nov 1 00:18:15.387965 containerd[1465]: time="2025-11-01T00:18:15.387905663Z" level=info msg="RemovePodSandbox for \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\"" Nov 1 00:18:15.390823 containerd[1465]: time="2025-11-01T00:18:15.390771305Z" level=info msg="Forcibly stopping sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\"" Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.435 [WARNING][5066] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--ltj5c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab5a5667-f558-4d28-9b68-0d3dbc43d636", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d5b006788c2cef5a203ca3806f5587dea5a7e945d58306c5d5f5db803caf59a", Pod:"goldmane-666569f655-ltj5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid497ffdf723", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.438 [INFO][5066] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.438 [INFO][5066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" iface="eth0" netns="" Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.438 [INFO][5066] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.438 [INFO][5066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.466 [INFO][5075] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" HandleID="k8s-pod-network.b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.466 [INFO][5075] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.466 [INFO][5075] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.476 [WARNING][5075] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" HandleID="k8s-pod-network.b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.476 [INFO][5075] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" HandleID="k8s-pod-network.b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Workload="localhost-k8s-goldmane--666569f655--ltj5c-eth0" Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.478 [INFO][5075] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:15.484408 containerd[1465]: 2025-11-01 00:18:15.481 [INFO][5066] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c" Nov 1 00:18:15.484408 containerd[1465]: time="2025-11-01T00:18:15.484266712Z" level=info msg="TearDown network for sandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\" successfully" Nov 1 00:18:15.492849 containerd[1465]: time="2025-11-01T00:18:15.492778816Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:15.492849 containerd[1465]: time="2025-11-01T00:18:15.492908390Z" level=info msg="RemovePodSandbox \"b8a6e39741b425b48dfd5bfd8a9c41989be0ad35e815b15c53dd2b72b39c3b7c\" returns successfully" Nov 1 00:18:15.493772 containerd[1465]: time="2025-11-01T00:18:15.493723721Z" level=info msg="StopPodSandbox for \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\"" Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.537 [WARNING][5093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0", GenerateName:"calico-kube-controllers-6fdc77bbd4-", Namespace:"calico-system", SelfLink:"", UID:"6970f73b-f9db-4e4e-ace1-ad25d9704f47", ResourceVersion:"1230", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fdc77bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb", Pod:"calico-kube-controllers-6fdc77bbd4-cflc4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali322a4b2d395", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.538 [INFO][5093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.538 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" iface="eth0" netns="" Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.538 [INFO][5093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.538 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.570 [INFO][5102] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" HandleID="k8s-pod-network.f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.570 [INFO][5102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.570 [INFO][5102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.577 [WARNING][5102] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" HandleID="k8s-pod-network.f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.577 [INFO][5102] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" HandleID="k8s-pod-network.f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.579 [INFO][5102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:15.586153 containerd[1465]: 2025-11-01 00:18:15.582 [INFO][5093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:15.586153 containerd[1465]: time="2025-11-01T00:18:15.586124964Z" level=info msg="TearDown network for sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\" successfully" Nov 1 00:18:15.586662 containerd[1465]: time="2025-11-01T00:18:15.586167975Z" level=info msg="StopPodSandbox for \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\" returns successfully" Nov 1 00:18:15.587025 containerd[1465]: time="2025-11-01T00:18:15.586841600Z" level=info msg="RemovePodSandbox for \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\"" Nov 1 00:18:15.587025 containerd[1465]: time="2025-11-01T00:18:15.586934875Z" level=info msg="Forcibly stopping sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\"" Nov 1 00:18:15.604365 containerd[1465]: time="2025-11-01T00:18:15.604170029Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:15.605422 containerd[1465]: time="2025-11-01T00:18:15.605387656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:18:15.605568 containerd[1465]: time="2025-11-01T00:18:15.605488967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:15.605847 kubelet[2591]: E1101 00:18:15.605780 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:15.606321 kubelet[2591]: E1101 00:18:15.605905 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:15.606321 kubelet[2591]: E1101 00:18:15.606251 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6fdc77bbd4-cflc4_calico-system(6970f73b-f9db-4e4e-ace1-ad25d9704f47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:15.606909 containerd[1465]: time="2025-11-01T00:18:15.606849922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:15.607445 kubelet[2591]: E1101 00:18:15.607412 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" 
podUID="6970f73b-f9db-4e4e-ace1-ad25d9704f47" Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.630 [WARNING][5121] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0", GenerateName:"calico-kube-controllers-6fdc77bbd4-", Namespace:"calico-system", SelfLink:"", UID:"6970f73b-f9db-4e4e-ace1-ad25d9704f47", ResourceVersion:"1230", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fdc77bbd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bff688a88ca25d6f2b8734b534149c5c4eaf57f0abacf51da06cd5c849b9f8eb", Pod:"calico-kube-controllers-6fdc77bbd4-cflc4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali322a4b2d395", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.631 [INFO][5121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.631 [INFO][5121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" iface="eth0" netns="" Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.631 [INFO][5121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.631 [INFO][5121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.659 [INFO][5130] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" HandleID="k8s-pod-network.f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.659 [INFO][5130] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.659 [INFO][5130] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
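The pull failure above is the whole chain in one place: ghcr.io answers the manifest request with HTTP 404 ("trying next host - response was http.StatusNotFound"), containerd maps that to a gRPC NotFound on the CRI PullImage call, and kubelet records ErrImagePull and reports "Error syncing pod, skipping" (it will back off before retrying). The tag can be checked outside kubelet with containerd's Go client; a sketch, assuming the containerd 1.x client module, the stock socket path, and the k8s.io namespace the CRI uses:

    // Reproduce the failed pull directly against containerd to confirm
    // the tag really is absent from the registry.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        ref := "ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
        if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
            // For a missing tag this prints the same "failed to resolve
            // reference ...: not found" text that kubelet relays above.
            fmt.Println("pull failed:", err)
        }
    }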
Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.670 [WARNING][5130] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" HandleID="k8s-pod-network.f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.670 [INFO][5130] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" HandleID="k8s-pod-network.f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Workload="localhost-k8s-calico--kube--controllers--6fdc77bbd4--cflc4-eth0" Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.672 [INFO][5130] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:15.679908 containerd[1465]: 2025-11-01 00:18:15.675 [INFO][5121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c" Nov 1 00:18:15.679908 containerd[1465]: time="2025-11-01T00:18:15.679916978Z" level=info msg="TearDown network for sandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\" successfully" Nov 1 00:18:15.689338 containerd[1465]: time="2025-11-01T00:18:15.689284519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:15.689338 containerd[1465]: time="2025-11-01T00:18:15.689336257Z" level=info msg="RemovePodSandbox \"f5bea1a04c42740f49855c9d5abbf5643c58db1fe16be7bbc17180d7f5c6f14c\" returns successfully" Nov 1 00:18:15.690006 containerd[1465]: time="2025-11-01T00:18:15.689951752Z" level=info msg="StopPodSandbox for \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\"" Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.761 [WARNING][5148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d94b4da9-d4a7-4f92-8ec6-90e45ff748b8", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390", Pod:"coredns-674b8bbfcf-b6lhx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibead52a0638", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.761 [INFO][5148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.761 [INFO][5148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" iface="eth0" netns="" Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.761 [INFO][5148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.761 [INFO][5148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.787 [INFO][5156] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" HandleID="k8s-pod-network.dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.788 [INFO][5156] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.788 [INFO][5156] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.830 [WARNING][5156] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" HandleID="k8s-pod-network.dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.830 [INFO][5156] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" HandleID="k8s-pod-network.dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.832 [INFO][5156] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:15.839123 containerd[1465]: 2025-11-01 00:18:15.835 [INFO][5148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:15.839123 containerd[1465]: time="2025-11-01T00:18:15.839068657Z" level=info msg="TearDown network for sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\" successfully" Nov 1 00:18:15.839123 containerd[1465]: time="2025-11-01T00:18:15.839106007Z" level=info msg="StopPodSandbox for \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\" returns successfully" Nov 1 00:18:15.839930 containerd[1465]: time="2025-11-01T00:18:15.839850856Z" level=info msg="RemovePodSandbox for \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\"" Nov 1 00:18:15.840003 containerd[1465]: time="2025-11-01T00:18:15.839937919Z" level=info msg="Forcibly stopping sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\"" Nov 1 00:18:15.907364 systemd[1]: Started sshd@13-10.0.0.38:22-10.0.0.1:45720.service - OpenSSH per-connection server daemon (10.0.0.1:45720). Nov 1 00:18:15.937981 containerd[1465]: time="2025-11-01T00:18:15.937925021Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:15.939577 containerd[1465]: time="2025-11-01T00:18:15.939410331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.883 [WARNING][5174] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d94b4da9-d4a7-4f92-8ec6-90e45ff748b8", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"565806ac10bc5bc4702b6ee28f187a433dc2d785154338bcae44791907e6d390", Pod:"coredns-674b8bbfcf-b6lhx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibead52a0638", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.883 [INFO][5174] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.883 [INFO][5174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" iface="eth0" netns="" Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.883 [INFO][5174] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.883 [INFO][5174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.917 [INFO][5183] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" HandleID="k8s-pod-network.dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.918 [INFO][5183] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.918 [INFO][5183] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.927 [WARNING][5183] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" HandleID="k8s-pod-network.dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.927 [INFO][5183] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" HandleID="k8s-pod-network.dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Workload="localhost-k8s-coredns--674b8bbfcf--b6lhx-eth0" Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.929 [INFO][5183] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:15.939830 containerd[1465]: 2025-11-01 00:18:15.933 [INFO][5174] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6" Nov 1 00:18:15.939830 containerd[1465]: time="2025-11-01T00:18:15.939646654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:15.940495 containerd[1465]: time="2025-11-01T00:18:15.939817596Z" level=info msg="TearDown network for sandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\" successfully" Nov 1 00:18:15.940544 kubelet[2591]: E1101 00:18:15.940021 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:15.940544 kubelet[2591]: E1101 00:18:15.940117 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:15.940544 kubelet[2591]: E1101 00:18:15.940338 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96ns7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84989fcb96-gtbgf_calico-apiserver(d3f82561-0214-49cb-b635-63c7018b0ce5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:15.941598 kubelet[2591]: E1101 00:18:15.941562 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" podUID="d3f82561-0214-49cb-b635-63c7018b0ce5" Nov 1 00:18:15.974849 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 45720 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:15.977936 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:15.987303 systemd-logind[1452]: New session 14 of user core. Nov 1 00:18:15.995117 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:18:16.064616 containerd[1465]: time="2025-11-01T00:18:16.064441467Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
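Note the pairing around the warning above: the sandbox's status can no longer be fetched ("not found"), yet the RemovePodSandbox that follows still returns successfully. Deletion on this path is idempotent: a not-found result during a repeated teardown is treated as success so retries converge instead of wedging the pod. A sketch of that convention using containerd's errdefs helpers; removeSandbox itself is a hypothetical stand-in for the real CRI call.

    package main

    import (
        "errors"
        "fmt"

        "github.com/containerd/containerd/errdefs"
    )

    // removeSandbox is hypothetical; it fails with ErrNotFound on a repeat call.
    func removeSandbox(id string, store map[string]bool) error {
        if !store[id] {
            return fmt.Errorf("sandbox %q: %w", id, errdefs.ErrNotFound)
        }
        delete(store, id)
        return nil
    }

    func main() {
        store := map[string]bool{"dc84472e20e4": true}
        for i := 0; i < 2; i++ {
            err := removeSandbox("dc84472e20e4", store)
            if err != nil && !errors.Is(err, errdefs.ErrNotFound) {
                fmt.Println("real failure:", err)
                continue
            }
            if errdefs.IsNotFound(err) {
                fmt.Println("already gone, treating as success")
                continue
            }
            fmt.Println("removed")
        }
    }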
Nov 1 00:18:16.065482 containerd[1465]: time="2025-11-01T00:18:16.065048848Z" level=info msg="RemovePodSandbox \"dc84472e20e4131a9c97021d7af376edcb8d6f2bebfc22fbf478b78fc1d6cea6\" returns successfully" Nov 1 00:18:16.066264 containerd[1465]: time="2025-11-01T00:18:16.065758362Z" level=info msg="StopPodSandbox for \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\"" Nov 1 00:18:16.208993 sshd[5189]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:16.225162 systemd[1]: sshd@13-10.0.0.38:22-10.0.0.1:45720.service: Deactivated successfully. Nov 1 00:18:16.229615 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:18:16.232487 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.184 [WARNING][5214] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" WorkloadEndpoint="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.184 [INFO][5214] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.184 [INFO][5214] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" iface="eth0" netns="" Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.184 [INFO][5214] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.184 [INFO][5214] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.215 [INFO][5224] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" HandleID="k8s-pod-network.16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Workload="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.215 [INFO][5224] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.216 [INFO][5224] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.225 [WARNING][5224] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" HandleID="k8s-pod-network.16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Workload="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.225 [INFO][5224] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" HandleID="k8s-pod-network.16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Workload="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.227 [INFO][5224] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:16.235705 containerd[1465]: 2025-11-01 00:18:16.231 [INFO][5214] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:16.236612 containerd[1465]: time="2025-11-01T00:18:16.235778019Z" level=info msg="TearDown network for sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\" successfully" Nov 1 00:18:16.236612 containerd[1465]: time="2025-11-01T00:18:16.235818636Z" level=info msg="StopPodSandbox for \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\" returns successfully" Nov 1 00:18:16.236612 containerd[1465]: time="2025-11-01T00:18:16.236588883Z" level=info msg="RemovePodSandbox for \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\"" Nov 1 00:18:16.236727 containerd[1465]: time="2025-11-01T00:18:16.236639728Z" level=info msg="Forcibly stopping sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\"" Nov 1 00:18:16.239848 systemd[1]: Started sshd@14-10.0.0.38:22-10.0.0.1:53918.service - OpenSSH per-connection server daemon (10.0.0.1:53918). Nov 1 00:18:16.241760 systemd-logind[1452]: Removed session 14. Nov 1 00:18:16.280888 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 53918 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:16.285074 containerd[1465]: time="2025-11-01T00:18:16.284356318Z" level=info msg="StopPodSandbox for \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\"" Nov 1 00:18:16.284616 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:16.296210 systemd-logind[1452]: New session 15 of user core. Nov 1 00:18:16.302132 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.286 [WARNING][5246] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" WorkloadEndpoint="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.286 [INFO][5246] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.291 [INFO][5246] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" iface="eth0" netns="" Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.292 [INFO][5246] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.292 [INFO][5246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.326 [INFO][5268] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" HandleID="k8s-pod-network.16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Workload="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.326 [INFO][5268] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
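These entries show the two guards on the DEL path side by side: the earlier sandboxes hit cni-plugin/k8s.go 604 (the stored WorkloadEndpoint names a different ContainerID, so a stale DEL must not delete the live endpoint), while the whisker sandbox here hits k8s.go 598 (the endpoint is already gone from the datastore, so cleanup proceeds anyway). A toy sketch of that decision, with illustrative types rather than Calico's:

    package main

    import "fmt"

    type workloadEndpoint struct{ ContainerID string }

    func handleDel(cniContainerID string, wep *workloadEndpoint) {
        switch {
        case wep == nil:
            fmt.Println("WEP does not exist, moving forward with the clean up")
        case wep.ContainerID != cniContainerID:
            fmt.Println("CNI_CONTAINERID does not match, don't delete WEP")
        default:
            fmt.Println("deleting WEP and releasing IPs")
        }
    }

    func main() {
        live := &workloadEndpoint{ContainerID: "5d5b006788c2"}
        handleDel("b8a6e39741b4", live) // stale sandbox: keep the live WEP
        handleDel("b8a6e39741b4", nil)  // endpoint already gone: clean up
    }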
Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.327 [INFO][5268] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.337 [WARNING][5268] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" HandleID="k8s-pod-network.16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Workload="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.338 [INFO][5268] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" HandleID="k8s-pod-network.16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Workload="localhost-k8s-whisker--99f944c66--9zxbf-eth0" Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.340 [INFO][5268] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:16.347063 containerd[1465]: 2025-11-01 00:18:16.344 [INFO][5246] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74" Nov 1 00:18:16.347800 containerd[1465]: time="2025-11-01T00:18:16.347443314Z" level=info msg="TearDown network for sandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\" successfully" Nov 1 00:18:16.357800 containerd[1465]: time="2025-11-01T00:18:16.357720763Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:16.357800 containerd[1465]: time="2025-11-01T00:18:16.357807756Z" level=info msg="RemovePodSandbox \"16f9e3f1b822ce5049d028491cd1fd5e7e61bb6934bfa1702af9081171806f74\" returns successfully" Nov 1 00:18:16.358502 containerd[1465]: time="2025-11-01T00:18:16.358457466Z" level=info msg="StopPodSandbox for \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\"" Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.348 [INFO][5269] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.349 [INFO][5269] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" iface="eth0" netns="/var/run/netns/cni-d1717ff3-1bd3-8297-c923-6053f700f654" Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.349 [INFO][5269] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" iface="eth0" netns="/var/run/netns/cni-d1717ff3-1bd3-8297-c923-6053f700f654" Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.349 [INFO][5269] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" iface="eth0" netns="/var/run/netns/cni-d1717ff3-1bd3-8297-c923-6053f700f654" Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.349 [INFO][5269] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.349 [INFO][5269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.378 [INFO][5285] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" HandleID="k8s-pod-network.e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" Workload="localhost-k8s-csi--node--driver--v865s-eth0" Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.378 [INFO][5285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.378 [INFO][5285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.388 [WARNING][5285] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" HandleID="k8s-pod-network.e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" Workload="localhost-k8s-csi--node--driver--v865s-eth0" Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.388 [INFO][5285] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" HandleID="k8s-pod-network.e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" Workload="localhost-k8s-csi--node--driver--v865s-eth0" Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.390 [INFO][5285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:16.397773 containerd[1465]: 2025-11-01 00:18:16.395 [INFO][5269] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f" Nov 1 00:18:16.402661 containerd[1465]: time="2025-11-01T00:18:16.402331776Z" level=info msg="TearDown network for sandbox \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\" successfully" Nov 1 00:18:16.402661 containerd[1465]: time="2025-11-01T00:18:16.402383824Z" level=info msg="StopPodSandbox for \"e3fc1dbb6c03b6463c2a054d80d1dc9cb03e17a96cf035c6bc21f514d7d79c9f\" returns successfully" Nov 1 00:18:16.405635 systemd[1]: run-netns-cni\x2dd1717ff3\x2d1bd3\x2d8297\x2dc923\x2d6053f700f654.mount: Deactivated successfully. Nov 1 00:18:16.407152 containerd[1465]: time="2025-11-01T00:18:16.406226945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v865s,Uid:cddeab39-52b2-4e4d-8121-8c667fc57977,Namespace:calico-system,Attempt:1,}" Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.411 [WARNING][5306] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604", Pod:"coredns-674b8bbfcf-jqnhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali660b501ece0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.411 [INFO][5306] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.412 [INFO][5306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" iface="eth0" netns="" Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.412 [INFO][5306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.412 [INFO][5306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.448 [INFO][5317] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" HandleID="k8s-pod-network.f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.448 [INFO][5317] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.449 [INFO][5317] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.460 [WARNING][5317] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" HandleID="k8s-pod-network.f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.460 [INFO][5317] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" HandleID="k8s-pod-network.f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.462 [INFO][5317] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:16.472418 containerd[1465]: 2025-11-01 00:18:16.466 [INFO][5306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:16.472418 containerd[1465]: time="2025-11-01T00:18:16.472387204Z" level=info msg="TearDown network for sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\" successfully" Nov 1 00:18:16.472418 containerd[1465]: time="2025-11-01T00:18:16.472413714Z" level=info msg="StopPodSandbox for \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\" returns successfully" Nov 1 00:18:16.473695 containerd[1465]: time="2025-11-01T00:18:16.473671808Z" level=info msg="RemovePodSandbox for \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\"" Nov 1 00:18:16.473765 containerd[1465]: time="2025-11-01T00:18:16.473702506Z" level=info msg="Forcibly stopping sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\"" Nov 1 00:18:16.520184 sshd[5235]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:16.530095 systemd[1]: sshd@14-10.0.0.38:22-10.0.0.1:53918.service: Deactivated successfully. Nov 1 00:18:16.533463 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:18:16.535462 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:18:16.548903 systemd[1]: Started sshd@15-10.0.0.38:22-10.0.0.1:53926.service - OpenSSH per-connection server daemon (10.0.0.1:53926). Nov 1 00:18:16.553412 systemd-logind[1452]: Removed session 15. Nov 1 00:18:16.594280 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 53926 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:16.596785 sshd[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:16.597317 systemd-networkd[1404]: calica034f0f6ea: Link UP Nov 1 00:18:16.601498 systemd-networkd[1404]: calica034f0f6ea: Gained carrier Nov 1 00:18:16.603946 systemd-logind[1452]: New session 16 of user core. Nov 1 00:18:16.609104 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.544 [WARNING][5355] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6cfa1e13-d1c3-4a18-ab06-7a4f7444edb4", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"515c0cbaad28412eb62d52af9aa76b3f3103a2a8f19bd2ab6bc72bd234e83604", Pod:"coredns-674b8bbfcf-jqnhp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali660b501ece0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.547 [INFO][5355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.547 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" iface="eth0" netns="" Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.547 [INFO][5355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.547 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.582 [INFO][5372] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" HandleID="k8s-pod-network.f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.582 [INFO][5372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.587 [INFO][5372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.596 [WARNING][5372] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" HandleID="k8s-pod-network.f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.596 [INFO][5372] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" HandleID="k8s-pod-network.f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Workload="localhost-k8s-coredns--674b8bbfcf--jqnhp-eth0" Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.598 [INFO][5372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:16.609756 containerd[1465]: 2025-11-01 00:18:16.605 [INFO][5355] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b" Nov 1 00:18:16.610715 containerd[1465]: time="2025-11-01T00:18:16.610006269Z" level=info msg="TearDown network for sandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\" successfully" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.476 [INFO][5324] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--v865s-eth0 csi-node-driver- calico-system cddeab39-52b2-4e4d-8121-8c667fc57977 1241 0 2025-11-01 00:17:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-v865s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calica034f0f6ea [] [] }} ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Namespace="calico-system" Pod="csi-node-driver-v865s" WorkloadEndpoint="localhost-k8s-csi--node--driver--v865s-" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.476 [INFO][5324] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Namespace="calico-system" Pod="csi-node-driver-v865s" WorkloadEndpoint="localhost-k8s-csi--node--driver--v865s-eth0" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.515 [INFO][5350] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" HandleID="k8s-pod-network.ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Workload="localhost-k8s-csi--node--driver--v865s-eth0" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.515 [INFO][5350] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" HandleID="k8s-pod-network.ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Workload="localhost-k8s-csi--node--driver--v865s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004edc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-v865s", "timestamp":"2025-11-01 00:18:16.515305258 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.515 [INFO][5350] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.515 [INFO][5350] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.515 [INFO][5350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.530 [INFO][5350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" host="localhost" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.547 [INFO][5350] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.560 [INFO][5350] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.571 [INFO][5350] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.573 [INFO][5350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.573 [INFO][5350] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" host="localhost" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.575 [INFO][5350] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.579 [INFO][5350] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" host="localhost" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.586 [INFO][5350] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" host="localhost" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.587 [INFO][5350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" host="localhost" Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.587 [INFO][5350] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:18:16.623040 containerd[1465]: 2025-11-01 00:18:16.587 [INFO][5350] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" HandleID="k8s-pod-network.ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Workload="localhost-k8s-csi--node--driver--v865s-eth0" Nov 1 00:18:16.623780 containerd[1465]: 2025-11-01 00:18:16.590 [INFO][5324] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Namespace="calico-system" Pod="csi-node-driver-v865s" WorkloadEndpoint="localhost-k8s-csi--node--driver--v865s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v865s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cddeab39-52b2-4e4d-8121-8c667fc57977", ResourceVersion:"1241", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-v865s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica034f0f6ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:16.623780 containerd[1465]: 2025-11-01 00:18:16.590 [INFO][5324] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Namespace="calico-system" Pod="csi-node-driver-v865s" WorkloadEndpoint="localhost-k8s-csi--node--driver--v865s-eth0" Nov 1 00:18:16.623780 containerd[1465]: 2025-11-01 00:18:16.590 [INFO][5324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica034f0f6ea ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Namespace="calico-system" Pod="csi-node-driver-v865s" WorkloadEndpoint="localhost-k8s-csi--node--driver--v865s-eth0" Nov 1 00:18:16.623780 containerd[1465]: 2025-11-01 00:18:16.601 [INFO][5324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Namespace="calico-system" Pod="csi-node-driver-v865s" WorkloadEndpoint="localhost-k8s-csi--node--driver--v865s-eth0" Nov 1 00:18:16.623780 containerd[1465]: 2025-11-01 00:18:16.606 [INFO][5324] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Namespace="calico-system" Pod="csi-node-driver-v865s"
WorkloadEndpoint="localhost-k8s-csi--node--driver--v865s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v865s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cddeab39-52b2-4e4d-8121-8c667fc57977", ResourceVersion:"1241", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b", Pod:"csi-node-driver-v865s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica034f0f6ea", MAC:"8a:d1:33:aa:03:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:16.623780 containerd[1465]: 2025-11-01 00:18:16.618 [INFO][5324] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b" Namespace="calico-system" Pod="csi-node-driver-v865s" WorkloadEndpoint="localhost-k8s-csi--node--driver--v865s-eth0" Nov 1 00:18:16.624950 containerd[1465]: time="2025-11-01T00:18:16.623593056Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:16.624950 containerd[1465]: time="2025-11-01T00:18:16.624185249Z" level=info msg="RemovePodSandbox \"f0d60965a964f141531f2d245ca0af8801be2f4bfe016c978541c45b308ee17b\" returns successfully" Nov 1 00:18:16.624950 containerd[1465]: time="2025-11-01T00:18:16.624675519Z" level=info msg="StopPodSandbox for \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\"" Nov 1 00:18:16.645672 containerd[1465]: time="2025-11-01T00:18:16.645533077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:16.645672 containerd[1465]: time="2025-11-01T00:18:16.645599681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:16.645672 containerd[1465]: time="2025-11-01T00:18:16.645619178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:16.645943 containerd[1465]: time="2025-11-01T00:18:16.645720608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:16.682227 systemd[1]: Started cri-containerd-ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b.scope - libcontainer container ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b. Nov 1 00:18:16.701664 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:18:16.719637 containerd[1465]: time="2025-11-01T00:18:16.719496256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v865s,Uid:cddeab39-52b2-4e4d-8121-8c667fc57977,Namespace:calico-system,Attempt:1,} returns sandbox id \"ab0fb321d25dae0cfcecf1f3c9492493b85070a19c600f0df990fd9fcd2c955b\"" Nov 1 00:18:16.723703 containerd[1465]: time="2025-11-01T00:18:16.722891946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.685 [WARNING][5403] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0", GenerateName:"calico-apiserver-84989fcb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3a2d948-d842-45c9-8a49-ba664ed2926c", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84989fcb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e", Pod:"calico-apiserver-84989fcb96-gd9wk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie54eb177f1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.686 [INFO][5403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.686 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" iface="eth0" netns="" Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.686 [INFO][5403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.686 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.731 [INFO][5451] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" HandleID="k8s-pod-network.242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.731 [INFO][5451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.731 [INFO][5451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.737 [WARNING][5451] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" HandleID="k8s-pod-network.242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.737 [INFO][5451] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" HandleID="k8s-pod-network.242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.739 [INFO][5451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:16.745247 containerd[1465]: 2025-11-01 00:18:16.742 [INFO][5403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:16.745247 containerd[1465]: time="2025-11-01T00:18:16.745085883Z" level=info msg="TearDown network for sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\" successfully" Nov 1 00:18:16.745247 containerd[1465]: time="2025-11-01T00:18:16.745115989Z" level=info msg="StopPodSandbox for \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\" returns successfully" Nov 1 00:18:16.746673 containerd[1465]: time="2025-11-01T00:18:16.746286709Z" level=info msg="RemovePodSandbox for \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\"" Nov 1 00:18:16.746673 containerd[1465]: time="2025-11-01T00:18:16.746320844Z" level=info msg="Forcibly stopping sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\"" Nov 1 00:18:16.779789 sshd[5368]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:16.784839 systemd[1]: sshd@15-10.0.0.38:22-10.0.0.1:53926.service: Deactivated successfully. Nov 1 00:18:16.788033 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:18:16.790164 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:18:16.791447 systemd-logind[1452]: Removed session 16. 
Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.792 [WARNING][5478] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0", GenerateName:"calico-apiserver-84989fcb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3a2d948-d842-45c9-8a49-ba664ed2926c", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84989fcb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"858785b24400c9a215473d9c924cb9693cd6e1e3e877c03bcb68ec27b796ce4e", Pod:"calico-apiserver-84989fcb96-gd9wk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie54eb177f1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.793 [INFO][5478] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.793 [INFO][5478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" iface="eth0" netns="" Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.793 [INFO][5478] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.793 [INFO][5478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.817 [INFO][5489] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" HandleID="k8s-pod-network.242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.817 [INFO][5489] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.817 [INFO][5489] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.824 [WARNING][5489] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist.
Ignoring ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" HandleID="k8s-pod-network.242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.824 [INFO][5489] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" HandleID="k8s-pod-network.242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Workload="localhost-k8s-calico--apiserver--84989fcb96--gd9wk-eth0" Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.825 [INFO][5489] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:16.831354 containerd[1465]: 2025-11-01 00:18:16.828 [INFO][5478] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7" Nov 1 00:18:16.831838 containerd[1465]: time="2025-11-01T00:18:16.831396812Z" level=info msg="TearDown network for sandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\" successfully" Nov 1 00:18:16.836241 containerd[1465]: time="2025-11-01T00:18:16.836186571Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:16.836371 containerd[1465]: time="2025-11-01T00:18:16.836249429Z" level=info msg="RemovePodSandbox \"242a85ed5f0950f89cb9254e763327fd5ef843a73959f02f1deaf39930ba60b7\" returns successfully" Nov 1 00:18:16.836710 containerd[1465]: time="2025-11-01T00:18:16.836683875Z" level=info msg="StopPodSandbox for \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\"" Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.875 [WARNING][5507] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0", GenerateName:"calico-apiserver-84989fcb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f82561-0214-49cb-b635-63c7018b0ce5", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84989fcb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204", Pod:"calico-apiserver-84989fcb96-gtbgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89476d61464", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.875 [INFO][5507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.875 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" iface="eth0" netns="" Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.875 [INFO][5507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.875 [INFO][5507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.902 [INFO][5516] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" HandleID="k8s-pod-network.c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.902 [INFO][5516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.902 [INFO][5516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.909 [WARNING][5516] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" HandleID="k8s-pod-network.c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.909 [INFO][5516] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" HandleID="k8s-pod-network.c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.910 [INFO][5516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:16.916618 containerd[1465]: 2025-11-01 00:18:16.913 [INFO][5507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:16.917269 containerd[1465]: time="2025-11-01T00:18:16.917229361Z" level=info msg="TearDown network for sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\" successfully" Nov 1 00:18:16.917269 containerd[1465]: time="2025-11-01T00:18:16.917263816Z" level=info msg="StopPodSandbox for \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\" returns successfully" Nov 1 00:18:16.917850 containerd[1465]: time="2025-11-01T00:18:16.917818939Z" level=info msg="RemovePodSandbox for \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\"" Nov 1 00:18:16.917944 containerd[1465]: time="2025-11-01T00:18:16.917894512Z" level=info msg="Forcibly stopping sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\"" Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.954 [WARNING][5534] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0", GenerateName:"calico-apiserver-84989fcb96-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3f82561-0214-49cb-b635-63c7018b0ce5", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84989fcb96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0f9fb859197341a06c207730a66a0a1443ff54f354a9ffdee7033186e0c5204", Pod:"calico-apiserver-84989fcb96-gtbgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89476d61464", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.955 [INFO][5534] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.955 [INFO][5534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" iface="eth0" netns="" Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.955 [INFO][5534] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.955 [INFO][5534] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.977 [INFO][5544] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" HandleID="k8s-pod-network.c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.978 [INFO][5544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.978 [INFO][5544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.985 [WARNING][5544] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" HandleID="k8s-pod-network.c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.985 [INFO][5544] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" HandleID="k8s-pod-network.c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Workload="localhost-k8s-calico--apiserver--84989fcb96--gtbgf-eth0" Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.987 [INFO][5544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:18:16.993821 containerd[1465]: 2025-11-01 00:18:16.990 [INFO][5534] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375" Nov 1 00:18:16.994381 containerd[1465]: time="2025-11-01T00:18:16.993868510Z" level=info msg="TearDown network for sandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\" successfully" Nov 1 00:18:16.998186 containerd[1465]: time="2025-11-01T00:18:16.998149231Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:18:16.998288 containerd[1465]: time="2025-11-01T00:18:16.998211669Z" level=info msg="RemovePodSandbox \"c700bed74ffbeb35507d463e0bbbc2eb14e6f5bb56a9f5065ac1229ff8c92375\" returns successfully" Nov 1 00:18:17.036961 containerd[1465]: time="2025-11-01T00:18:17.036885511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:17.038409 containerd[1465]: time="2025-11-01T00:18:17.038364532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:18:17.038483 containerd[1465]: time="2025-11-01T00:18:17.038440665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:18:17.038737 kubelet[2591]: E1101 00:18:17.038690 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:18:17.039295 kubelet[2591]: E1101 00:18:17.038754 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:18:17.039295 kubelet[2591]: E1101 00:18:17.038967 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5vh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v865s_calico-system(cddeab39-52b2-4e4d-8121-8c667fc57977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:17.041098 containerd[1465]: time="2025-11-01T00:18:17.041059136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:18:17.361463 containerd[1465]: time="2025-11-01T00:18:17.361246350Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:17.362966 containerd[1465]: time="2025-11-01T00:18:17.362896772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:18:17.363057 containerd[1465]: time="2025-11-01T00:18:17.362964781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:18:17.363283 kubelet[2591]: E1101 00:18:17.363223 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:18:17.363370 kubelet[2591]: E1101 00:18:17.363295 2591 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:18:17.363591 kubelet[2591]: E1101 00:18:17.363457 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5vh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v865s_calico-system(cddeab39-52b2-4e4d-8121-8c667fc57977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:17.364801 kubelet[2591]: E1101 00:18:17.364755 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:18:17.776891 kubelet[2591]: E1101 00:18:17.776757 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:18:18.284339 containerd[1465]: time="2025-11-01T00:18:18.284282150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:18:18.313252 systemd-networkd[1404]: calica034f0f6ea: Gained IPv6LL Nov 1 00:18:18.705729 containerd[1465]: time="2025-11-01T00:18:18.705568014Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:18.733269 containerd[1465]: time="2025-11-01T00:18:18.733160695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:18:18.733498 containerd[1465]: time="2025-11-01T00:18:18.733221699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:18.733628 kubelet[2591]: E1101 00:18:18.733568 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:18.734152 kubelet[2591]: E1101 00:18:18.733640 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:18.734152 kubelet[2591]: E1101 00:18:18.733829 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4ns7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ltj5c_calico-system(ab5a5667-f558-4d28-9b68-0d3dbc43d636): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:18.735134 kubelet[2591]: E1101 00:18:18.735071 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ltj5c" podUID="ab5a5667-f558-4d28-9b68-0d3dbc43d636" Nov 1 00:18:21.795303 systemd[1]: Started 
sshd@16-10.0.0.38:22-10.0.0.1:53930.service - OpenSSH per-connection server daemon (10.0.0.1:53930). Nov 1 00:18:21.837818 sshd[5558]: Accepted publickey for core from 10.0.0.1 port 53930 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:21.839816 sshd[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:21.844717 systemd-logind[1452]: New session 17 of user core. Nov 1 00:18:21.856058 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:18:21.984229 sshd[5558]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:21.989539 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:18:21.990378 systemd[1]: sshd@16-10.0.0.38:22-10.0.0.1:53930.service: Deactivated successfully. Nov 1 00:18:21.994304 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:18:21.995317 systemd-logind[1452]: Removed session 17. Nov 1 00:18:22.284219 containerd[1465]: time="2025-11-01T00:18:22.284166753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:18:22.603989 containerd[1465]: time="2025-11-01T00:18:22.603775540Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:22.605192 containerd[1465]: time="2025-11-01T00:18:22.605135282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:18:22.605243 containerd[1465]: time="2025-11-01T00:18:22.605202238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:18:22.605481 kubelet[2591]: E1101 00:18:22.605419 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:22.605840 kubelet[2591]: E1101 00:18:22.605491 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:22.605840 kubelet[2591]: E1101 00:18:22.605651 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d083028835b047a397c3176e571d04eb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwmm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cfbf4bb6d-hrb7l_calico-system(1d99ca26-0cda-4b97-b45f-ad18f38bfeae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:22.607560 containerd[1465]: time="2025-11-01T00:18:22.607531766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:18:22.922849 containerd[1465]: time="2025-11-01T00:18:22.922691203Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:22.924634 containerd[1465]: time="2025-11-01T00:18:22.924592344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:18:22.924730 containerd[1465]: time="2025-11-01T00:18:22.924671694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:22.924905 kubelet[2591]: E1101 00:18:22.924833 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:22.925193 kubelet[2591]: E1101 00:18:22.924923 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:22.925193 kubelet[2591]: E1101 00:18:22.925082 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwmm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cfbf4bb6d-hrb7l_calico-system(1d99ca26-0cda-4b97-b45f-ad18f38bfeae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:22.926275 kubelet[2591]: E1101 00:18:22.926240 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cfbf4bb6d-hrb7l" podUID="1d99ca26-0cda-4b97-b45f-ad18f38bfeae" Nov 1 00:18:23.283523 containerd[1465]: time="2025-11-01T00:18:23.283465066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:23.633835 containerd[1465]: time="2025-11-01T00:18:23.633644352Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 1 00:18:23.635074 containerd[1465]: time="2025-11-01T00:18:23.634997092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:23.635123 containerd[1465]: time="2025-11-01T00:18:23.635045703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:23.635340 kubelet[2591]: E1101 00:18:23.635294 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:23.635671 kubelet[2591]: E1101 00:18:23.635354 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:23.635671 kubelet[2591]: E1101 00:18:23.635606 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2zg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod calico-apiserver-84989fcb96-gd9wk_calico-apiserver(d3a2d948-d842-45c9-8a49-ba664ed2926c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:23.636848 kubelet[2591]: E1101 00:18:23.636806 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" podUID="d3a2d948-d842-45c9-8a49-ba664ed2926c" Nov 1 00:18:27.003948 systemd[1]: Started sshd@17-10.0.0.38:22-10.0.0.1:58338.service - OpenSSH per-connection server daemon (10.0.0.1:58338). Nov 1 00:18:27.042922 sshd[5584]: Accepted publickey for core from 10.0.0.1 port 58338 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:27.045058 sshd[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:27.049664 systemd-logind[1452]: New session 18 of user core. Nov 1 00:18:27.059053 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:18:27.177121 sshd[5584]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:27.181212 systemd[1]: sshd@17-10.0.0.38:22-10.0.0.1:58338.service: Deactivated successfully. Nov 1 00:18:27.183644 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:18:27.184419 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:18:27.185256 systemd-logind[1452]: Removed session 18. 
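Every pull above dies at the same step: containerd resolves the tag against ghcr.io, receives HTTP 404 ("trying next host - response was http.StatusNotFound"), and surfaces it as a NotFound RPC error. Below is a minimal sketch of that resolution step using the OCI distribution API; the repository and tag come from the failing goldmane pull in the log, while the anonymous-token flow for ghcr.io public images is an assumption that may need adjusting.

```python
# Minimal sketch: reproduce containerd's manifest-resolution step against
# ghcr.io. Repository/tag are taken from the log; the token endpoint and its
# parameters are assumptions about ghcr.io's anonymous-pull flow.
import json
import urllib.error
import urllib.request

REPO = "flatcar/calico/goldmane"  # from the failing pull in the log
TAG = "v3.30.4"

def check_tag(repo: str, tag: str) -> int:
    # 1) Fetch an anonymous pull token for the repository (assumed endpoint).
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # 2) Ask for the manifest; a 404 here is what containerd logs as "not found".
    manifest_url = f"https://ghcr.io/v2/{repo}/manifests/{tag}"
    req = urllib.request.Request(manifest_url, method="HEAD")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/vnd.oci.image.index.v1+json")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status  # 200: the tag exists
    except urllib.error.HTTPError as err:
        return err.code  # 404: tag missing, matching the log above

if __name__ == "__main__":
    print(check_tag(REPO, TAG))
```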
Nov 1 00:18:29.287650 containerd[1465]: time="2025-11-01T00:18:29.287558642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:18:29.621069 containerd[1465]: time="2025-11-01T00:18:29.620808087Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:29.622457 containerd[1465]: time="2025-11-01T00:18:29.622387089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:18:29.622502 containerd[1465]: time="2025-11-01T00:18:29.622470116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:18:29.622772 kubelet[2591]: E1101 00:18:29.622683 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:18:29.622772 kubelet[2591]: E1101 00:18:29.622758 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:18:29.623472 kubelet[2591]: E1101 00:18:29.622963 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5vh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v865s_calico-system(cddeab39-52b2-4e4d-8121-8c667fc57977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:29.624992 containerd[1465]: time="2025-11-01T00:18:29.624961993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:18:30.004593 containerd[1465]: time="2025-11-01T00:18:30.004506931Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:30.029698 containerd[1465]: time="2025-11-01T00:18:30.029567339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:18:30.029953 containerd[1465]: time="2025-11-01T00:18:30.029649735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:18:30.030027 kubelet[2591]: E1101 00:18:30.029896 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:18:30.030027 kubelet[2591]: E1101 00:18:30.029972 2591 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:18:30.030308 kubelet[2591]: E1101 00:18:30.030187 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5vh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v865s_calico-system(cddeab39-52b2-4e4d-8121-8c667fc57977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:30.031516 kubelet[2591]: E1101 00:18:30.031443 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:18:30.284543 kubelet[2591]: E1101 00:18:30.284357 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ltj5c" podUID="ab5a5667-f558-4d28-9b68-0d3dbc43d636" Nov 1 00:18:31.283046 kubelet[2591]: E1101 00:18:31.282848 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:31.284016 kubelet[2591]: E1101 00:18:31.283891 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" podUID="6970f73b-f9db-4e4e-ace1-ad25d9704f47" Nov 1 00:18:31.284226 kubelet[2591]: E1101 00:18:31.284088 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" podUID="d3f82561-0214-49cb-b635-63c7018b0ce5" Nov 1 00:18:32.193853 systemd[1]: Started sshd@18-10.0.0.38:22-10.0.0.1:58346.service - OpenSSH per-connection server daemon (10.0.0.1:58346). Nov 1 00:18:32.234874 sshd[5600]: Accepted publickey for core from 10.0.0.1 port 58346 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:32.236939 sshd[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:32.243415 systemd-logind[1452]: New session 19 of user core. Nov 1 00:18:32.255103 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:18:32.379533 sshd[5600]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:32.384731 systemd[1]: sshd@18-10.0.0.38:22-10.0.0.1:58346.service: Deactivated successfully. Nov 1 00:18:32.387251 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:18:32.388204 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:18:32.389449 systemd-logind[1452]: Removed session 19. 
Nov 1 00:18:33.718124 kubelet[2591]: E1101 00:18:33.718086 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:35.285917 kubelet[2591]: E1101 00:18:35.285214 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cfbf4bb6d-hrb7l" podUID="1d99ca26-0cda-4b97-b45f-ad18f38bfeae" Nov 1 00:18:36.284531 kubelet[2591]: E1101 00:18:36.284466 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" podUID="d3a2d948-d842-45c9-8a49-ba664ed2926c" Nov 1 00:18:37.283773 kubelet[2591]: E1101 00:18:37.283693 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:37.391718 systemd[1]: Started sshd@19-10.0.0.38:22-10.0.0.1:43656.service - OpenSSH per-connection server daemon (10.0.0.1:43656). Nov 1 00:18:37.439775 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 43656 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:37.442009 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:37.447740 systemd-logind[1452]: New session 20 of user core. Nov 1 00:18:37.456056 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:18:37.610439 sshd[5640]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:37.618360 systemd[1]: sshd@19-10.0.0.38:22-10.0.0.1:43656.service: Deactivated successfully. Nov 1 00:18:37.621682 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:18:37.622580 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:18:37.623677 systemd-logind[1452]: Removed session 20. 
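The recurring "Nameserver limits exceeded" entries show the kubelet truncating the pod resolv.conf to three nameservers, the classic glibc resolver limit; the applied line "1.1.1.1 1.0.0.1 8.8.8.8" is what survives. A small sketch of that truncation follows; the fourth nameserver in the sample input is a hypothetical stand-in, since the omitted server is not logged.

```python
# Sketch of the truncation behind the kubelet's "Nameserver limits exceeded"
# warning: only the first three nameservers are kept (the glibc resolver
# limit). The first three inputs mirror the applied line in the log; the
# fourth (9.9.9.9) is a made-up example of a dropped server.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf: str) -> list[str]:
    servers = [
        line.split()[1]
        for line in resolv_conf.splitlines()
        if line.startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAX_NAMESERVERS]

conf = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""
print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```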
Nov 1 00:18:41.284631 kubelet[2591]: E1101 00:18:41.284538 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977" Nov 1 00:18:42.627774 systemd[1]: Started sshd@20-10.0.0.38:22-10.0.0.1:43660.service - OpenSSH per-connection server daemon (10.0.0.1:43660). Nov 1 00:18:42.665440 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 43660 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:42.667222 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:42.671608 systemd-logind[1452]: New session 21 of user core. Nov 1 00:18:42.686056 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:18:42.805643 sshd[5655]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:42.816239 systemd[1]: sshd@20-10.0.0.38:22-10.0.0.1:43660.service: Deactivated successfully. Nov 1 00:18:42.818643 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:18:42.820459 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:18:42.828144 systemd[1]: Started sshd@21-10.0.0.38:22-10.0.0.1:43674.service - OpenSSH per-connection server daemon (10.0.0.1:43674). Nov 1 00:18:42.829425 systemd-logind[1452]: Removed session 21. Nov 1 00:18:42.860345 sshd[5670]: Accepted publickey for core from 10.0.0.1 port 43674 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:42.862072 sshd[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:42.866502 systemd-logind[1452]: New session 22 of user core. Nov 1 00:18:42.881089 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:18:43.204591 sshd[5670]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:43.213247 systemd[1]: sshd@21-10.0.0.38:22-10.0.0.1:43674.service: Deactivated successfully. Nov 1 00:18:43.215427 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:18:43.217358 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:18:43.227204 systemd[1]: Started sshd@22-10.0.0.38:22-10.0.0.1:43690.service - OpenSSH per-connection server daemon (10.0.0.1:43690). Nov 1 00:18:43.228580 systemd-logind[1452]: Removed session 22. Nov 1 00:18:43.264097 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 43690 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:43.265759 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:43.269994 systemd-logind[1452]: New session 23 of user core. 
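For multi-container pods such as csi-node-driver-v865s above, pod_workers folds one StartContainer failure per container into a single bracketed "Error syncing pod" message. The sketch below approximates that aggregation; the formatting imitates the log lines rather than reproducing the kubelet's actual code, and the failure reasons are taken from the entries above.

```python
# Rough sketch of how one "Error syncing pod" line aggregates per-container
# StartContainer failures, approximating the bracketed format in the log.
def aggregate_sync_errors(failures: dict[str, str]) -> str:
    parts = [
        f'failed to "StartContainer" for "{name}" with ErrImagePull: "{reason}"'
        for name, reason in failures.items()
    ]
    # Single failure: bare message; multiple failures: bracketed list.
    return parts[0] if len(parts) == 1 else "[" + ", ".join(parts) + "]"

print(aggregate_sync_errors({
    "calico-csi": "ghcr.io/flatcar/calico/csi:v3.30.4: not found",
    "csi-node-driver-registrar": "ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found",
}))
```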
Nov 1 00:18:43.281021 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 00:18:43.284446 containerd[1465]: time="2025-11-01T00:18:43.284143986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:18:43.600077 containerd[1465]: time="2025-11-01T00:18:43.600018688Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:43.603190 containerd[1465]: time="2025-11-01T00:18:43.603130527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:18:43.603302 containerd[1465]: time="2025-11-01T00:18:43.603238472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:43.603494 kubelet[2591]: E1101 00:18:43.603427 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:43.604070 kubelet[2591]: E1101 00:18:43.603498 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:18:43.604070 kubelet[2591]: E1101 00:18:43.603774 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnwn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6fdc77bbd4-cflc4_calico-system(6970f73b-f9db-4e4e-ace1-ad25d9704f47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:43.604240 containerd[1465]: time="2025-11-01T00:18:43.603841795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:18:43.605646 kubelet[2591]: E1101 00:18:43.605602 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" podUID="6970f73b-f9db-4e4e-ace1-ad25d9704f47" Nov 1 00:18:43.904270 sshd[5682]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:43.917748 systemd[1]: sshd@22-10.0.0.38:22-10.0.0.1:43690.service: Deactivated successfully. Nov 1 00:18:43.921530 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:18:43.922652 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:18:43.929455 systemd[1]: Started sshd@23-10.0.0.38:22-10.0.0.1:43706.service - OpenSSH per-connection server daemon (10.0.0.1:43706). Nov 1 00:18:43.930668 systemd-logind[1452]: Removed session 23. Nov 1 00:18:43.969752 sshd[5705]: Accepted publickey for core from 10.0.0.1 port 43706 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:43.970217 containerd[1465]: time="2025-11-01T00:18:43.969911575Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:43.971440 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:43.975372 systemd-logind[1452]: New session 24 of user core. Nov 1 00:18:43.982021 systemd[1]: Started session-24.scope - Session 24 of User core. 
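The calico-kube-controllers spec dumped above carries exec probes: /usr/bin/check-status -l every 60s with failureThreshold 6 for liveness (initialDelaySeconds 10), and -r every 30s with failureThreshold 3 for readiness. The arithmetic below bounds how long such settings can take to act, using only the logged values; it is a simplification that ignores probe timeouts and scheduling jitter.

```python
# Rough upper bound on time-to-action for the probe settings logged above:
# all failureThreshold probes must fail consecutively before action is taken.
def worst_case_detection(initial_delay: int, period: int, failure_threshold: int) -> int:
    return initial_delay + period * failure_threshold

print(worst_case_detection(10, 60, 6))  # liveness: ~370s before a restart
print(worst_case_detection(0, 30, 3))   # readiness: ~90s before marked unready
```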
Nov 1 00:18:44.095806 containerd[1465]: time="2025-11-01T00:18:44.095679567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:18:44.096026 containerd[1465]: time="2025-11-01T00:18:44.095733259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:44.096117 kubelet[2591]: E1101 00:18:44.096061 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:44.096261 kubelet[2591]: E1101 00:18:44.096131 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:18:44.096392 kubelet[2591]: E1101 00:18:44.096323 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c4ns7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ltj5c_calico-system(ab5a5667-f558-4d28-9b68-0d3dbc43d636): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:44.098066 kubelet[2591]: E1101 00:18:44.097935 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ltj5c" podUID="ab5a5667-f558-4d28-9b68-0d3dbc43d636" Nov 1 00:18:44.234072 sshd[5705]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:44.242695 systemd[1]: sshd@23-10.0.0.38:22-10.0.0.1:43706.service: Deactivated successfully. Nov 1 00:18:44.244738 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:18:44.248232 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:18:44.257452 systemd[1]: Started sshd@24-10.0.0.38:22-10.0.0.1:43714.service - OpenSSH per-connection server daemon (10.0.0.1:43714). Nov 1 00:18:44.258896 systemd-logind[1452]: Removed session 24. Nov 1 00:18:44.283105 kubelet[2591]: E1101 00:18:44.283058 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:44.290998 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 43714 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:44.292837 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:44.298085 systemd-logind[1452]: New session 25 of user core. Nov 1 00:18:44.310096 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 1 00:18:44.421833 sshd[5718]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:44.426628 systemd[1]: sshd@24-10.0.0.38:22-10.0.0.1:43714.service: Deactivated successfully. Nov 1 00:18:44.429128 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:18:44.429804 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:18:44.431448 systemd-logind[1452]: Removed session 25. 
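The same goldmane pull fails again here at 00:18:44, so the failure is deterministic and reproducible outside the kubelet. One hedged way to replay it by hand from the node is via containerd's ctr CLI in the kubelet's k8s.io namespace, wrapped in a subprocess; the flags shown are standard ctr usage, but worth verifying against the installed version.

```python
# Replay the failing pull directly against containerd, outside the kubelet.
# Expect the same "failed to resolve reference ... not found" error on stderr.
import subprocess

IMAGE = "ghcr.io/flatcar/calico/goldmane:v3.30.4"  # from the log

result = subprocess.run(
    ["ctr", "--namespace", "k8s.io", "images", "pull", IMAGE],
    capture_output=True,
    text=True,
)
print(result.returncode)          # non-zero on the NotFound failure
print(result.stderr.strip())      # containerd's resolution error
```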
Nov 1 00:18:45.284873 containerd[1465]: time="2025-11-01T00:18:45.284503500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:45.611331 containerd[1465]: time="2025-11-01T00:18:45.611164431Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:45.612490 containerd[1465]: time="2025-11-01T00:18:45.612427878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:45.612582 containerd[1465]: time="2025-11-01T00:18:45.612494303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:45.612741 kubelet[2591]: E1101 00:18:45.612687 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:45.613182 kubelet[2591]: E1101 00:18:45.612754 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:45.613182 kubelet[2591]: E1101 00:18:45.612930 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96ns7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84989fcb96-gtbgf_calico-apiserver(d3f82561-0214-49cb-b635-63c7018b0ce5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:45.614141 kubelet[2591]: E1101 00:18:45.614102 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" podUID="d3f82561-0214-49cb-b635-63c7018b0ce5" Nov 1 00:18:48.284374 containerd[1465]: time="2025-11-01T00:18:48.284309394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:18:48.649606 containerd[1465]: time="2025-11-01T00:18:48.649422971Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:48.651132 containerd[1465]: time="2025-11-01T00:18:48.651085075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:18:48.651194 containerd[1465]: time="2025-11-01T00:18:48.651140370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:18:48.651446 kubelet[2591]: E1101 00:18:48.651393 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:48.651847 kubelet[2591]: E1101 00:18:48.651466 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:18:48.651847 kubelet[2591]: E1101 00:18:48.651761 2591 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d083028835b047a397c3176e571d04eb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jwmm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-cfbf4bb6d-hrb7l_calico-system(1d99ca26-0cda-4b97-b45f-ad18f38bfeae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:48.652075 containerd[1465]: time="2025-11-01T00:18:48.651971127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:18:48.976832 containerd[1465]: time="2025-11-01T00:18:48.976769432Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:48.978227 containerd[1465]: time="2025-11-01T00:18:48.978172084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:18:48.978305 containerd[1465]: time="2025-11-01T00:18:48.978231176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:18:48.978501 kubelet[2591]: E1101 00:18:48.978449 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:48.978564 kubelet[2591]: E1101 00:18:48.978514 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:18:48.978976 containerd[1465]: time="2025-11-01T00:18:48.978933368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:18:48.979029 kubelet[2591]: E1101 00:18:48.978898 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2zg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84989fcb96-gd9wk_calico-apiserver(d3a2d948-d842-45c9-8a49-ba664ed2926c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:48.980447 kubelet[2591]: E1101 00:18:48.980374 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" podUID="d3a2d948-d842-45c9-8a49-ba664ed2926c" Nov 1 00:18:49.283283 kubelet[2591]: E1101 00:18:49.283111 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:18:49.314953 containerd[1465]: 
time="2025-11-01T00:18:49.314850368Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:18:49.316220 containerd[1465]: time="2025-11-01T00:18:49.316170864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:18:49.316320 containerd[1465]: time="2025-11-01T00:18:49.316231940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:18:49.316520 kubelet[2591]: E1101 00:18:49.316472 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:49.316588 kubelet[2591]: E1101 00:18:49.316542 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:18:49.316807 kubelet[2591]: E1101 00:18:49.316744 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwmm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-cfbf4bb6d-hrb7l_calico-system(1d99ca26-0cda-4b97-b45f-ad18f38bfeae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:18:49.317978 kubelet[2591]: E1101 00:18:49.317933 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-cfbf4bb6d-hrb7l" podUID="1d99ca26-0cda-4b97-b45f-ad18f38bfeae" Nov 1 00:18:49.434258 systemd[1]: Started sshd@25-10.0.0.38:22-10.0.0.1:58274.service - OpenSSH per-connection server daemon (10.0.0.1:58274). Nov 1 00:18:49.472998 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 58274 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg Nov 1 00:18:49.474686 sshd[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:18:49.478944 systemd-logind[1452]: New session 26 of user core. Nov 1 00:18:49.494989 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 1 00:18:49.612466 sshd[5741]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:49.618726 systemd[1]: sshd@25-10.0.0.38:22-10.0.0.1:58274.service: Deactivated successfully. Nov 1 00:18:49.621576 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:18:49.622761 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:18:49.624650 systemd-logind[1452]: Removed session 26. 
Nov 1 00:18:52.282764 kubelet[2591]: E1101 00:18:52.282253 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:18:53.284565 containerd[1465]: time="2025-11-01T00:18:53.284487551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 1 00:18:53.652021 containerd[1465]: time="2025-11-01T00:18:53.651814485Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:18:53.653223 containerd[1465]: time="2025-11-01T00:18:53.653164349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 1 00:18:53.653442 containerd[1465]: time="2025-11-01T00:18:53.653239842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 1 00:18:53.653491 kubelet[2591]: E1101 00:18:53.653425 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 00:18:53.653977 kubelet[2591]: E1101 00:18:53.653497 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 1 00:18:53.653977 kubelet[2591]: E1101 00:18:53.653663 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5vh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v865s_calico-system(cddeab39-52b2-4e4d-8121-8c667fc57977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:18:53.655873 containerd[1465]: time="2025-11-01T00:18:53.655828799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 1 00:18:53.971444 containerd[1465]: time="2025-11-01T00:18:53.970990893Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:18:53.972928 containerd[1465]: time="2025-11-01T00:18:53.972780141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 1 00:18:53.973094 containerd[1465]: time="2025-11-01T00:18:53.972952238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 1 00:18:53.973357 kubelet[2591]: E1101 00:18:53.973272 2591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:18:53.973357 kubelet[2591]: E1101 00:18:53.973364 2591 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:18:53.973569 kubelet[2591]: E1101 00:18:53.973527 2591 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5vh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-v865s_calico-system(cddeab39-52b2-4e4d-8121-8c667fc57977): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:18:53.974944 kubelet[2591]: E1101 00:18:53.974830 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v865s" podUID="cddeab39-52b2-4e4d-8121-8c667fc57977"
Nov 1 00:18:54.283776 kubelet[2591]: E1101 00:18:54.283679 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6fdc77bbd4-cflc4" podUID="6970f73b-f9db-4e4e-ace1-ad25d9704f47"
Nov 1 00:18:54.624770 systemd[1]: Started sshd@26-10.0.0.38:22-10.0.0.1:58276.service - OpenSSH per-connection server daemon (10.0.0.1:58276).
Nov 1 00:18:54.676327 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 58276 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg
Nov 1 00:18:54.678502 sshd[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:18:54.683072 systemd-logind[1452]: New session 27 of user core.
Nov 1 00:18:54.690022 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 1 00:18:54.838907 sshd[5757]: pam_unix(sshd:session): session closed for user core
Nov 1 00:18:54.843549 systemd[1]: sshd@26-10.0.0.38:22-10.0.0.1:58276.service: Deactivated successfully.
Nov 1 00:18:54.846202 systemd[1]: session-27.scope: Deactivated successfully.
Nov 1 00:18:54.846934 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit.
Nov 1 00:18:54.847979 systemd-logind[1452]: Removed session 27.
Nov 1 00:18:56.283569 kubelet[2591]: E1101 00:18:56.283512 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gtbgf" podUID="d3f82561-0214-49cb-b635-63c7018b0ce5"
Nov 1 00:18:58.283479 kubelet[2591]: E1101 00:18:58.283420 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ltj5c" podUID="ab5a5667-f558-4d28-9b68-0d3dbc43d636"
Nov 1 00:18:59.852167 systemd[1]: Started sshd@27-10.0.0.38:22-10.0.0.1:52170.service - OpenSSH per-connection server daemon (10.0.0.1:52170).
Nov 1 00:18:59.895542 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 52170 ssh2: RSA SHA256:sP9eTyII4+k60hMksMjREtSLjH2AZmH9OExd1QyZACg
Nov 1 00:18:59.897769 sshd[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:18:59.902621 systemd-logind[1452]: New session 28 of user core.
Nov 1 00:18:59.909120 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 1 00:19:00.076894 sshd[5774]: pam_unix(sshd:session): session closed for user core
Nov 1 00:19:00.081137 systemd[1]: sshd@27-10.0.0.38:22-10.0.0.1:52170.service: Deactivated successfully.
Nov 1 00:19:00.083821 systemd[1]: session-28.scope: Deactivated successfully.
Nov 1 00:19:00.084628 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit.
Nov 1 00:19:00.085705 systemd-logind[1452]: Removed session 28.
Nov 1 00:19:01.284317 kubelet[2591]: E1101 00:19:01.283793 2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84989fcb96-gd9wk" podUID="d3a2d948-d842-45c9-8a49-ba664ed2926c"