Oct 31 00:43:08.958610 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 22:59:39 -00 2025 Oct 31 00:43:08.958632 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 00:43:08.958643 kernel: BIOS-provided physical RAM map: Oct 31 00:43:08.958649 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 31 00:43:08.958655 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 31 00:43:08.958662 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 31 00:43:08.958669 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Oct 31 00:43:08.958675 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Oct 31 00:43:08.958681 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 31 00:43:08.958690 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 31 00:43:08.958696 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 31 00:43:08.958703 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 31 00:43:08.958713 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 31 00:43:08.958719 kernel: NX (Execute Disable) protection: active Oct 31 00:43:08.958727 kernel: APIC: Static calls initialized Oct 31 00:43:08.958739 kernel: SMBIOS 2.8 present. 
Oct 31 00:43:08.958746 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Oct 31 00:43:08.958753 kernel: Hypervisor detected: KVM Oct 31 00:43:08.958759 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 31 00:43:08.958766 kernel: kvm-clock: using sched offset of 3616125786 cycles Oct 31 00:43:08.958773 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 31 00:43:08.958781 kernel: tsc: Detected 2794.748 MHz processor Oct 31 00:43:08.958788 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 31 00:43:08.958795 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 31 00:43:08.958802 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Oct 31 00:43:08.958812 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 31 00:43:08.958819 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 31 00:43:08.958826 kernel: Using GB pages for direct mapping Oct 31 00:43:08.958833 kernel: ACPI: Early table checksum verification disabled Oct 31 00:43:08.958840 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Oct 31 00:43:08.958847 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:43:08.958854 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:43:08.958861 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:43:08.958882 kernel: ACPI: FACS 0x000000009CFE0000 000040 Oct 31 00:43:08.958890 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:43:08.958897 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:43:08.958904 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 00:43:08.958912 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Oct 31 00:43:08.958919 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Oct 31 00:43:08.958926 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Oct 31 00:43:08.958938 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Oct 31 00:43:08.958948 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Oct 31 00:43:08.958956 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Oct 31 00:43:08.958963 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Oct 31 00:43:08.958971 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Oct 31 00:43:08.958978 kernel: No NUMA configuration found Oct 31 00:43:08.958985 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Oct 31 00:43:08.958993 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Oct 31 00:43:08.959003 kernel: Zone ranges: Oct 31 00:43:08.959011 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 31 00:43:08.959018 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Oct 31 00:43:08.959026 kernel: Normal empty Oct 31 00:43:08.959033 kernel: Movable zone start for each node Oct 31 00:43:08.959041 kernel: Early memory node ranges Oct 31 00:43:08.959048 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 31 00:43:08.959056 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Oct 31 00:43:08.959063 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Oct 31 00:43:08.959073 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 31 00:43:08.959084 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 31 00:43:08.959091 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 31 00:43:08.959098 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 31 00:43:08.959105 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 31 00:43:08.959112 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 31 00:43:08.959120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 31 00:43:08.959127 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 31 00:43:08.959134 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 31 00:43:08.959143 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 31 00:43:08.959151 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 31 00:43:08.959158 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 31 00:43:08.959165 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 31 00:43:08.959172 kernel: TSC deadline timer available Oct 31 00:43:08.959179 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Oct 31 00:43:08.959186 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 31 00:43:08.959194 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 31 00:43:08.959203 kernel: kvm-guest: setup PV sched yield Oct 31 00:43:08.959213 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 31 00:43:08.959220 kernel: Booting paravirtualized kernel on KVM Oct 31 00:43:08.959227 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 31 00:43:08.959235 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 31 00:43:08.959242 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288 Oct 31 00:43:08.959249 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152 Oct 31 00:43:08.959257 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 31 00:43:08.959264 kernel: kvm-guest: PV spinlocks enabled Oct 31 00:43:08.959271 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 31 00:43:08.959282 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 00:43:08.959289 kernel: random: crng init done Oct 31 00:43:08.959296 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 31 00:43:08.959304 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 00:43:08.959311 kernel: Fallback order for Node 0: 0 Oct 31 00:43:08.959318 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Oct 31 00:43:08.959325 kernel: Policy zone: DMA32 Oct 31 00:43:08.959332 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 00:43:08.959340 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 136900K reserved, 0K cma-reserved) Oct 31 00:43:08.959357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 31 00:43:08.959365 kernel: ftrace: allocating 37980 entries in 149 pages Oct 31 00:43:08.959373 kernel: ftrace: allocated 149 pages with 4 groups Oct 31 00:43:08.959380 kernel: Dynamic Preempt: voluntary Oct 31 00:43:08.959387 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 31 00:43:08.959398 kernel: rcu: RCU event tracing is enabled. Oct 31 00:43:08.959405 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 31 00:43:08.959413 kernel: Trampoline variant of Tasks RCU enabled. Oct 31 00:43:08.959420 kernel: Rude variant of Tasks RCU enabled. Oct 31 00:43:08.959430 kernel: Tracing variant of Tasks RCU enabled. Oct 31 00:43:08.959437 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 31 00:43:08.959445 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 31 00:43:08.959454 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 31 00:43:08.959462 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 31 00:43:08.959469 kernel: Console: colour VGA+ 80x25 Oct 31 00:43:08.959476 kernel: printk: console [ttyS0] enabled Oct 31 00:43:08.959483 kernel: ACPI: Core revision 20230628 Oct 31 00:43:08.959490 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 31 00:43:08.959500 kernel: APIC: Switch to symmetric I/O mode setup Oct 31 00:43:08.959507 kernel: x2apic enabled Oct 31 00:43:08.959515 kernel: APIC: Switched APIC routing to: physical x2apic Oct 31 00:43:08.959522 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 31 00:43:08.959529 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 31 00:43:08.959537 kernel: kvm-guest: setup PV IPIs Oct 31 00:43:08.959544 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 31 00:43:08.959562 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 31 00:43:08.959569 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Oct 31 00:43:08.959577 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 31 00:43:08.959585 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 31 00:43:08.959595 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 31 00:43:08.959602 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 31 00:43:08.959610 kernel: Spectre V2 : Mitigation: Retpolines Oct 31 00:43:08.959618 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 31 00:43:08.959625 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 31 00:43:08.959635 kernel: active return thunk: retbleed_return_thunk Oct 31 00:43:08.959643 kernel: RETBleed: Mitigation: untrained return thunk Oct 31 00:43:08.959653 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 31 00:43:08.959661 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 31 00:43:08.959669 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 31 00:43:08.959677 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 31 00:43:08.959684 kernel: active return thunk: srso_return_thunk Oct 31 00:43:08.959692 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 31 00:43:08.959700 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 31 00:43:08.959710 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 31 00:43:08.959718 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 31 00:43:08.959725 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 31 00:43:08.959733 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Oct 31 00:43:08.959741 kernel: Freeing SMP alternatives memory: 32K Oct 31 00:43:08.959748 kernel: pid_max: default: 32768 minimum: 301 Oct 31 00:43:08.959756 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 31 00:43:08.959763 kernel: landlock: Up and running. Oct 31 00:43:08.959771 kernel: SELinux: Initializing. Oct 31 00:43:08.959781 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 00:43:08.959789 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 31 00:43:08.959796 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 31 00:43:08.959804 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 00:43:08.959812 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 00:43:08.959820 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 31 00:43:08.959827 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 31 00:43:08.959837 kernel: ... version: 0 Oct 31 00:43:08.959848 kernel: ... bit width: 48 Oct 31 00:43:08.959855 kernel: ... generic registers: 6 Oct 31 00:43:08.959863 kernel: ... value mask: 0000ffffffffffff Oct 31 00:43:08.959881 kernel: ... max period: 00007fffffffffff Oct 31 00:43:08.959889 kernel: ... fixed-purpose events: 0 Oct 31 00:43:08.959897 kernel: ... event mask: 000000000000003f Oct 31 00:43:08.959904 kernel: signal: max sigframe size: 1776 Oct 31 00:43:08.959912 kernel: rcu: Hierarchical SRCU implementation. Oct 31 00:43:08.959919 kernel: rcu: Max phase no-delay instances is 400. Oct 31 00:43:08.959927 kernel: smp: Bringing up secondary CPUs ... Oct 31 00:43:08.959937 kernel: smpboot: x86: Booting SMP configuration: Oct 31 00:43:08.959945 kernel: .... 
node #0, CPUs: #1 #2 #3 Oct 31 00:43:08.959952 kernel: smp: Brought up 1 node, 4 CPUs Oct 31 00:43:08.959960 kernel: smpboot: Max logical packages: 1 Oct 31 00:43:08.959967 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Oct 31 00:43:08.959975 kernel: devtmpfs: initialized Oct 31 00:43:08.959983 kernel: x86/mm: Memory block size: 128MB Oct 31 00:43:08.959991 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 00:43:08.959998 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 31 00:43:08.960008 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 00:43:08.960016 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 00:43:08.960024 kernel: audit: initializing netlink subsys (disabled) Oct 31 00:43:08.960031 kernel: audit: type=2000 audit(1761871388.116:1): state=initialized audit_enabled=0 res=1 Oct 31 00:43:08.960039 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 00:43:08.960046 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 00:43:08.960054 kernel: cpuidle: using governor menu Oct 31 00:43:08.960062 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 00:43:08.960069 kernel: dca service started, version 1.12.1 Oct 31 00:43:08.960080 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 31 00:43:08.960087 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 31 00:43:08.960095 kernel: PCI: Using configuration type 1 for base access Oct 31 00:43:08.960103 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 31 00:43:08.960110 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 00:43:08.960118 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 00:43:08.960125 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 00:43:08.960136 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 00:43:08.960146 kernel: ACPI: Added _OSI(Module Device) Oct 31 00:43:08.960154 kernel: ACPI: Added _OSI(Processor Device) Oct 31 00:43:08.960161 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 00:43:08.960169 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 00:43:08.960176 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 31 00:43:08.960184 kernel: ACPI: Interpreter enabled Oct 31 00:43:08.960191 kernel: ACPI: PM: (supports S0 S3 S5) Oct 31 00:43:08.960199 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 00:43:08.960206 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 00:43:08.960214 kernel: PCI: Using E820 reservations for host bridge windows Oct 31 00:43:08.960224 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 31 00:43:08.960232 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 31 00:43:08.960463 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 00:43:08.960604 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 31 00:43:08.960733 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 31 00:43:08.960743 kernel: PCI host bridge to bus 0000:00 Oct 31 00:43:08.960913 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 00:43:08.961044 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 31 00:43:08.961162 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 00:43:08.961280 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xafffffff window] Oct 31 00:43:08.961407 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 31 00:43:08.961525 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 31 00:43:08.961642 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 31 00:43:08.961805 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 31 00:43:08.962056 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Oct 31 00:43:08.962189 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Oct 31 00:43:08.962319 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Oct 31 00:43:08.962460 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Oct 31 00:43:08.962589 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 31 00:43:08.962735 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Oct 31 00:43:08.962884 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Oct 31 00:43:08.963020 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Oct 31 00:43:08.963149 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Oct 31 00:43:08.963298 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Oct 31 00:43:08.963437 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Oct 31 00:43:08.963567 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Oct 31 00:43:08.963695 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Oct 31 00:43:08.963846 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 31 00:43:08.964008 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Oct 31 00:43:08.964137 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Oct 31 00:43:08.964263 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Oct 31 00:43:08.964407 kernel: pci 0000:00:04.0: reg 0x30: [mem 
0xfeb80000-0xfebbffff pref] Oct 31 00:43:08.964552 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 31 00:43:08.964680 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 31 00:43:08.964829 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 31 00:43:08.965036 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Oct 31 00:43:08.965172 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Oct 31 00:43:08.965315 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 31 00:43:08.965450 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Oct 31 00:43:08.965461 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 31 00:43:08.965473 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 31 00:43:08.965481 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 31 00:43:08.965489 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 31 00:43:08.965497 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 31 00:43:08.965504 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 31 00:43:08.965512 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 31 00:43:08.965520 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 31 00:43:08.965528 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 31 00:43:08.965535 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 31 00:43:08.965546 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 31 00:43:08.965553 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 31 00:43:08.965561 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 31 00:43:08.965569 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 31 00:43:08.965576 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 31 00:43:08.965584 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 31 00:43:08.965592 
kernel: iommu: Default domain type: Translated Oct 31 00:43:08.965599 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 31 00:43:08.965607 kernel: PCI: Using ACPI for IRQ routing Oct 31 00:43:08.965617 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 31 00:43:08.965625 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 31 00:43:08.965632 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Oct 31 00:43:08.965759 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 31 00:43:08.965900 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 31 00:43:08.966029 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 31 00:43:08.966039 kernel: vgaarb: loaded Oct 31 00:43:08.966047 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 31 00:43:08.966055 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 31 00:43:08.966067 kernel: clocksource: Switched to clocksource kvm-clock Oct 31 00:43:08.966075 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 00:43:08.966082 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 31 00:43:08.966090 kernel: pnp: PnP ACPI init Oct 31 00:43:08.966254 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 31 00:43:08.966265 kernel: pnp: PnP ACPI: found 6 devices Oct 31 00:43:08.966274 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 31 00:43:08.966281 kernel: NET: Registered PF_INET protocol family Oct 31 00:43:08.966293 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 31 00:43:08.966301 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 31 00:43:08.966309 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 00:43:08.966316 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 00:43:08.966324 kernel: TCP 
bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 31 00:43:08.966332 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 31 00:43:08.966340 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 00:43:08.966363 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 31 00:43:08.966371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 00:43:08.966382 kernel: NET: Registered PF_XDP protocol family Oct 31 00:43:08.966504 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 31 00:43:08.966621 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 31 00:43:08.966738 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 31 00:43:08.966890 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Oct 31 00:43:08.967014 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 31 00:43:08.967146 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 31 00:43:08.967156 kernel: PCI: CLS 0 bytes, default 64 Oct 31 00:43:08.967169 kernel: Initialise system trusted keyrings Oct 31 00:43:08.967177 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 31 00:43:08.967185 kernel: Key type asymmetric registered Oct 31 00:43:08.967192 kernel: Asymmetric key parser 'x509' registered Oct 31 00:43:08.967200 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 31 00:43:08.967208 kernel: io scheduler mq-deadline registered Oct 31 00:43:08.967216 kernel: io scheduler kyber registered Oct 31 00:43:08.967223 kernel: io scheduler bfq registered Oct 31 00:43:08.967231 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 00:43:08.967242 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 31 00:43:08.967250 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 31 00:43:08.967257 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 31 00:43:08.967265 kernel: Serial: 
8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 00:43:08.967273 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 00:43:08.967280 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 31 00:43:08.967288 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 00:43:08.967296 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 00:43:08.967456 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 31 00:43:08.967472 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 00:43:08.967611 kernel: rtc_cmos 00:04: registered as rtc0 Oct 31 00:43:08.967735 kernel: rtc_cmos 00:04: setting system clock to 2025-10-31T00:43:08 UTC (1761871388) Oct 31 00:43:08.967860 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 31 00:43:08.967950 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 31 00:43:08.967961 kernel: NET: Registered PF_INET6 protocol family Oct 31 00:43:08.967970 kernel: Segment Routing with IPv6 Oct 31 00:43:08.967980 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 00:43:08.967995 kernel: NET: Registered PF_PACKET protocol family Oct 31 00:43:08.968005 kernel: Key type dns_resolver registered Oct 31 00:43:08.968015 kernel: IPI shorthand broadcast: enabled Oct 31 00:43:08.968024 kernel: sched_clock: Marking stable (1027003877, 190580039)->(1273602610, -56018694) Oct 31 00:43:08.968034 kernel: registered taskstats version 1 Oct 31 00:43:08.968043 kernel: Loading compiled-in X.509 certificates Oct 31 00:43:08.968053 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 3640cadef2ce00a652278ae302be325ebb54a228' Oct 31 00:43:08.968063 kernel: Key type .fscrypt registered Oct 31 00:43:08.968071 kernel: Key type fscrypt-provisioning registered Oct 31 00:43:08.968081 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 31 00:43:08.968089 kernel: ima: Allocated hash algorithm: sha1
Oct 31 00:43:08.968096 kernel: ima: No architecture policies found
Oct 31 00:43:08.968104 kernel: clk: Disabling unused clocks
Oct 31 00:43:08.968112 kernel: Freeing unused kernel image (initmem) memory: 42880K
Oct 31 00:43:08.968119 kernel: Write protecting the kernel read-only data: 36864k
Oct 31 00:43:08.968127 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Oct 31 00:43:08.968135 kernel: Run /init as init process
Oct 31 00:43:08.968142 kernel: with arguments:
Oct 31 00:43:08.968152 kernel: /init
Oct 31 00:43:08.968160 kernel: with environment:
Oct 31 00:43:08.968167 kernel: HOME=/
Oct 31 00:43:08.968175 kernel: TERM=linux
Oct 31 00:43:08.968185 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 31 00:43:08.968195 systemd[1]: Detected virtualization kvm.
Oct 31 00:43:08.968203 systemd[1]: Detected architecture x86-64.
Oct 31 00:43:08.968211 systemd[1]: Running in initrd.
Oct 31 00:43:08.968221 systemd[1]: No hostname configured, using default hostname.
Oct 31 00:43:08.968230 systemd[1]: Hostname set to .
Oct 31 00:43:08.968238 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:43:08.968246 systemd[1]: Queued start job for default target initrd.target.
Oct 31 00:43:08.968254 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:43:08.968263 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:43:08.968272 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 31 00:43:08.968280 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 00:43:08.968291 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 31 00:43:08.968312 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 31 00:43:08.968325 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 31 00:43:08.968333 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 31 00:43:08.968352 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:43:08.968361 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:43:08.968369 systemd[1]: Reached target paths.target - Path Units.
Oct 31 00:43:08.968378 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 00:43:08.968387 systemd[1]: Reached target swap.target - Swaps.
Oct 31 00:43:08.968395 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 00:43:08.968403 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 00:43:08.968412 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 00:43:08.968420 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 31 00:43:08.968431 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 31 00:43:08.968440 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:43:08.968448 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:43:08.968457 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:43:08.968465 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 00:43:08.968474 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 31 00:43:08.968482 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 00:43:08.968490 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 31 00:43:08.968499 systemd[1]: Starting systemd-fsck-usr.service...
Oct 31 00:43:08.968509 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 00:43:08.968518 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 00:43:08.968526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:43:08.968535 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 31 00:43:08.968562 systemd-journald[193]: Collecting audit messages is disabled.
Oct 31 00:43:08.968587 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:43:08.968595 systemd[1]: Finished systemd-fsck-usr.service.
Oct 31 00:43:08.968609 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 31 00:43:08.968618 systemd-journald[193]: Journal started
Oct 31 00:43:08.968636 systemd-journald[193]: Runtime Journal (/run/log/journal/a6ef2756c2e945f297cd0375a02394fb) is 6.0M, max 48.4M, 42.3M free.
Oct 31 00:43:08.970893 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 00:43:08.970951 systemd-modules-load[194]: Inserted module 'overlay'
Oct 31 00:43:09.042479 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 31 00:43:09.042505 kernel: Bridge firewalling registered
Oct 31 00:43:09.000187 systemd-modules-load[194]: Inserted module 'br_netfilter'
Oct 31 00:43:09.043153 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:43:09.043708 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:43:09.061048 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 00:43:09.065242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 00:43:09.067856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 00:43:09.074096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:43:09.079858 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:43:09.084500 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:43:09.085722 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:43:09.086590 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:43:09.090762 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 00:43:09.116671 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:43:09.124142 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 31 00:43:09.125076 systemd-resolved[219]: Positive Trust Anchors:
Oct 31 00:43:09.125085 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:43:09.125116 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 00:43:09.127626 systemd-resolved[219]: Defaulting to hostname 'linux'.
Oct 31 00:43:09.128852 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 00:43:09.129705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:43:09.157455 dracut-cmdline[230]: dracut-dracut-053
Oct 31 00:43:09.161600 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 00:43:09.265926 kernel: SCSI subsystem initialized
Oct 31 00:43:09.276922 kernel: Loading iSCSI transport class v2.0-870.
Oct 31 00:43:09.289921 kernel: iscsi: registered transport (tcp)
Oct 31 00:43:09.312238 kernel: iscsi: registered transport (qla4xxx)
Oct 31 00:43:09.312265 kernel: QLogic iSCSI HBA Driver
Oct 31 00:43:09.367704 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 31 00:43:09.381068 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 31 00:43:09.406400 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 31 00:43:09.406444 kernel: device-mapper: uevent: version 1.0.3
Oct 31 00:43:09.407997 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 31 00:43:09.449909 kernel: raid6: avx2x4 gen() 30319 MB/s
Oct 31 00:43:09.466897 kernel: raid6: avx2x2 gen() 30897 MB/s
Oct 31 00:43:09.484719 kernel: raid6: avx2x1 gen() 25652 MB/s
Oct 31 00:43:09.484746 kernel: raid6: using algorithm avx2x2 gen() 30897 MB/s
Oct 31 00:43:09.502745 kernel: raid6: .... xor() 19866 MB/s, rmw enabled
Oct 31 00:43:09.502778 kernel: raid6: using avx2x2 recovery algorithm
Oct 31 00:43:09.523916 kernel: xor: automatically using best checksumming function avx
Oct 31 00:43:09.681931 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 31 00:43:09.697167 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 00:43:09.714054 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:43:09.726650 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Oct 31 00:43:09.731480 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:43:09.744113 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 31 00:43:09.763488 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Oct 31 00:43:09.804158 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 00:43:09.823277 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 00:43:09.893200 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:43:09.903073 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 31 00:43:09.916532 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 31 00:43:09.921909 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 00:43:09.926243 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:43:09.928261 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 00:43:09.941418 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 31 00:43:09.951484 kernel: cryptd: max_cpu_qlen set to 1000
Oct 31 00:43:09.951539 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 31 00:43:09.953632 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 00:43:09.963906 kernel: libata version 3.00 loaded.
Oct 31 00:43:09.966945 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 31 00:43:09.968533 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:43:09.974066 kernel: ahci 0000:00:1f.2: version 3.0
Oct 31 00:43:09.974279 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 31 00:43:09.970592 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:43:09.992163 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 31 00:43:09.992399 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 31 00:43:09.992549 kernel: scsi host0: ahci
Oct 31 00:43:09.992732 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 31 00:43:09.992765 kernel: GPT:9289727 != 19775487
Oct 31 00:43:09.992779 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 31 00:43:09.979782 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:43:09.999384 kernel: GPT:9289727 != 19775487
Oct 31 00:43:09.999400 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 31 00:43:09.999411 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:43:09.999421 kernel: scsi host1: ahci
Oct 31 00:43:09.982065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:43:10.005938 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 31 00:43:10.005959 kernel: scsi host2: ahci
Oct 31 00:43:10.007138 kernel: scsi host3: ahci
Oct 31 00:43:10.007311 kernel: scsi host4: ahci
Oct 31 00:43:10.007493 kernel: AES CTR mode by8 optimization enabled
Oct 31 00:43:09.982257 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:43:10.014962 kernel: scsi host5: ahci
Oct 31 00:43:10.015161 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Oct 31 00:43:10.015174 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Oct 31 00:43:10.015184 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Oct 31 00:43:10.015195 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Oct 31 00:43:09.999587 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:43:10.027118 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Oct 31 00:43:10.027140 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Oct 31 00:43:10.029188 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:43:10.034693 kernel: BTRFS: device fsid 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (457)
Oct 31 00:43:10.034713 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Oct 31 00:43:10.048653 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 31 00:43:10.126631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:43:10.144245 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 31 00:43:10.152442 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 31 00:43:10.156712 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 31 00:43:10.167446 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 00:43:10.178036 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 31 00:43:10.182972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 31 00:43:10.206994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:43:10.322970 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 31 00:43:10.325888 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 31 00:43:10.325916 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 31 00:43:10.327934 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 31 00:43:10.328027 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 31 00:43:10.329392 kernel: ata3.00: applying bridge limits
Oct 31 00:43:10.329908 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 31 00:43:10.330910 kernel: ata3.00: configured for UDMA/100
Oct 31 00:43:10.334922 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 31 00:43:10.335032 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 31 00:43:10.388668 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 31 00:43:10.388930 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 31 00:43:10.401903 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 31 00:43:10.732022 disk-uuid[553]: Primary Header is updated.
Oct 31 00:43:10.732022 disk-uuid[553]: Secondary Entries is updated.
Oct 31 00:43:10.732022 disk-uuid[553]: Secondary Header is updated.
Oct 31 00:43:10.737922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:43:10.739906 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:43:11.743907 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 31 00:43:11.743996 disk-uuid[579]: The operation has completed successfully.
Oct 31 00:43:11.775193 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 31 00:43:11.775383 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 31 00:43:11.804051 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 31 00:43:11.809093 sh[590]: Success
Oct 31 00:43:11.824906 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 31 00:43:11.859698 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 31 00:43:11.878640 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 31 00:43:11.881629 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 31 00:43:11.897815 kernel: BTRFS info (device dm-0): first mount of filesystem 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0
Oct 31 00:43:11.897849 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:43:11.897860 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 31 00:43:11.900787 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 31 00:43:11.900806 kernel: BTRFS info (device dm-0): using free space tree
Oct 31 00:43:11.906715 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 31 00:43:11.908162 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 31 00:43:11.919059 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 31 00:43:11.920589 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 31 00:43:11.933332 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:43:11.933373 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:43:11.933408 kernel: BTRFS info (device vda6): using free space tree
Oct 31 00:43:11.937996 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 31 00:43:11.947447 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 31 00:43:11.950437 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:43:11.960370 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 31 00:43:11.970039 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 31 00:43:12.125748 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 00:43:12.134986 ignition[688]: Ignition 2.19.0
Oct 31 00:43:12.135165 ignition[688]: Stage: fetch-offline
Oct 31 00:43:12.136067 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 00:43:12.135218 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:43:12.135232 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:43:12.135394 ignition[688]: parsed url from cmdline: ""
Oct 31 00:43:12.135399 ignition[688]: no config URL provided
Oct 31 00:43:12.135405 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Oct 31 00:43:12.135415 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Oct 31 00:43:12.135448 ignition[688]: op(1): [started] loading QEMU firmware config module
Oct 31 00:43:12.135454 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 31 00:43:12.145591 ignition[688]: op(1): [finished] loading QEMU firmware config module
Oct 31 00:43:12.161147 systemd-networkd[777]: lo: Link UP
Oct 31 00:43:12.161158 systemd-networkd[777]: lo: Gained carrier
Oct 31 00:43:12.162914 systemd-networkd[777]: Enumeration completed
Oct 31 00:43:12.163004 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 00:43:12.163352 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:43:12.163356 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 00:43:12.179241 systemd[1]: Reached target network.target - Network.
Oct 31 00:43:12.204748 systemd-networkd[777]: eth0: Link UP
Oct 31 00:43:12.204762 systemd-networkd[777]: eth0: Gained carrier
Oct 31 00:43:12.204778 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:43:12.226919 systemd-networkd[777]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 00:43:12.256166 ignition[688]: parsing config with SHA512: 04724cdd43328758e0501acdc83d2971ecbef4eee3bb6eb5011b455adcebb6eee7aaf1aaeacd0ee13dd7222f4f064f33947c648c5182ef1e42cc34deed17c6c4
Oct 31 00:43:12.260507 unknown[688]: fetched base config from "system"
Oct 31 00:43:12.260522 unknown[688]: fetched user config from "qemu"
Oct 31 00:43:12.260912 ignition[688]: fetch-offline: fetch-offline passed
Oct 31 00:43:12.260996 ignition[688]: Ignition finished successfully
Oct 31 00:43:12.266229 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 00:43:12.270814 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 31 00:43:12.284071 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 31 00:43:12.299938 ignition[782]: Ignition 2.19.0
Oct 31 00:43:12.299950 ignition[782]: Stage: kargs
Oct 31 00:43:12.300147 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:43:12.300161 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:43:12.301136 ignition[782]: kargs: kargs passed
Oct 31 00:43:12.301187 ignition[782]: Ignition finished successfully
Oct 31 00:43:12.308444 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 31 00:43:12.322052 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 31 00:43:12.345105 ignition[791]: Ignition 2.19.0
Oct 31 00:43:12.345117 ignition[791]: Stage: disks
Oct 31 00:43:12.345318 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Oct 31 00:43:12.345332 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:43:12.346184 ignition[791]: disks: disks passed
Oct 31 00:43:12.346229 ignition[791]: Ignition finished successfully
Oct 31 00:43:12.354562 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 31 00:43:12.355445 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 31 00:43:12.355760 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 31 00:43:12.361617 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 00:43:12.365484 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 00:43:12.368587 systemd[1]: Reached target basic.target - Basic System.
Oct 31 00:43:12.395369 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 31 00:43:12.410740 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 31 00:43:12.418165 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 31 00:43:12.433974 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 31 00:43:12.525914 kernel: EXT4-fs (vda9): mounted filesystem 044ea9d4-3e15-48f6-be3f-240ec74f6b62 r/w with ordered data mode. Quota mode: none.
Oct 31 00:43:12.526674 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 31 00:43:12.528031 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 31 00:43:12.543014 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 00:43:12.547333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 31 00:43:12.552096 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Oct 31 00:43:12.552121 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:43:12.552289 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 31 00:43:12.561022 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:43:12.561041 kernel: BTRFS info (device vda6): using free space tree
Oct 31 00:43:12.561053 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 31 00:43:12.552367 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 31 00:43:12.552411 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 00:43:12.570547 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 00:43:12.573586 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 31 00:43:12.589054 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 31 00:43:12.692903 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Oct 31 00:43:12.700782 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Oct 31 00:43:12.707072 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Oct 31 00:43:12.713063 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 31 00:43:12.825337 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 31 00:43:12.842982 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 31 00:43:12.847343 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 31 00:43:12.859916 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:43:12.884866 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 31 00:43:12.892231 ignition[922]: INFO : Ignition 2.19.0
Oct 31 00:43:12.892231 ignition[922]: INFO : Stage: mount
Oct 31 00:43:12.894802 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:43:12.894802 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:43:12.896525 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 31 00:43:12.900519 ignition[922]: INFO : mount: mount passed
Oct 31 00:43:12.901744 ignition[922]: INFO : Ignition finished successfully
Oct 31 00:43:12.905393 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 31 00:43:12.917047 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 31 00:43:12.926650 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 31 00:43:12.940900 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Oct 31 00:43:12.945050 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea
Oct 31 00:43:12.945117 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 00:43:12.945129 kernel: BTRFS info (device vda6): using free space tree
Oct 31 00:43:12.949907 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 31 00:43:12.952247 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 00:43:12.983028 ignition[955]: INFO : Ignition 2.19.0
Oct 31 00:43:12.983028 ignition[955]: INFO : Stage: files
Oct 31 00:43:12.985847 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:43:12.985847 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:43:12.985847 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 00:43:12.985847 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 00:43:12.985847 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 00:43:13.080257 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 00:43:13.082973 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 00:43:13.082973 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 00:43:13.082973 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 00:43:13.082973 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 31 00:43:13.081195 unknown[955]: wrote ssh authorized keys file for user: core
Oct 31 00:43:13.125886 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 31 00:43:13.323941 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 00:43:13.323941 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:43:13.330390 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 31 00:43:13.746767 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 31 00:43:14.196212 systemd-networkd[777]: eth0: Gained IPv6LL
Oct 31 00:43:14.614002 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 00:43:14.614002 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 31 00:43:14.619987 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:43:14.619987 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 00:43:14.619987 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 31 00:43:14.619987 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 31 00:43:14.619987 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:43:14.619987 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 00:43:14.619987 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 31 00:43:14.619987 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:43:14.654738 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:43:14.660537 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 00:43:14.663466 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 00:43:14.663466 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 00:43:14.663466 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 00:43:14.663466 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:43:14.663466 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 00:43:14.663466 ignition[955]: INFO : files: files passed
Oct 31 00:43:14.663466 ignition[955]: INFO : Ignition finished successfully
Oct 31 00:43:14.680166 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 31 00:43:14.696162 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 31 00:43:14.701142 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 31 00:43:14.705514 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 00:43:14.707180 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 31 00:43:14.713838 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 31 00:43:14.719208 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:43:14.719208 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:43:14.724677 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 00:43:14.729105 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:43:14.733627 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 31 00:43:14.740120 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 31 00:43:14.769723 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 00:43:14.769892 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 31 00:43:14.773649 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 31 00:43:14.777069 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 31 00:43:14.778851 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 31 00:43:14.790065 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 31 00:43:14.808963 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:43:14.827046 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 31 00:43:14.839846 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:43:14.842661 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:43:14.920310 ignition[1010]: INFO : Ignition 2.19.0
Oct 31 00:43:14.920310 ignition[1010]: INFO : Stage: umount
Oct 31 00:43:14.920310 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 00:43:14.920310 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 00:43:14.920310 ignition[1010]: INFO : umount: umount passed
Oct 31 00:43:14.920310 ignition[1010]: INFO : Ignition finished successfully
Oct 31 00:43:14.843442 systemd[1]: Stopped target timers.target - Timer Units.
Oct 31 00:43:14.843682 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 00:43:14.843795 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 00:43:14.844234 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 31 00:43:14.844508 systemd[1]: Stopped target basic.target - Basic System.
Oct 31 00:43:14.844777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 31 00:43:14.845333 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 00:43:14.845602 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 31 00:43:14.845889 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 31 00:43:14.846173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 00:43:14.846760 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 31 00:43:14.847353 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 31 00:43:14.847599 systemd[1]: Stopped target swap.target - Swaps.
Oct 31 00:43:14.847840 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 00:43:14.847966 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 00:43:14.848476 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:43:14.848755 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:43:14.849275 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 31 00:43:14.849383 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:43:14.849558 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 00:43:14.849665 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 31 00:43:14.850417 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 00:43:14.850528 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 00:43:14.850774 systemd[1]: Stopped target paths.target - Path Units.
Oct 31 00:43:14.851249 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 00:43:14.854926 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:43:14.855467 systemd[1]: Stopped target slices.target - Slice Units.
Oct 31 00:43:14.855726 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 31 00:43:14.856314 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 00:43:14.856408 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 00:43:14.856609 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 00:43:14.856697 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 00:43:14.856894 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 00:43:14.857019 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 00:43:14.857198 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 00:43:14.857301 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 31 00:43:14.858293 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 31 00:43:14.859252 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 31 00:43:14.859381 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 00:43:14.859485 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:43:14.859734 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 00:43:14.859833 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 00:43:14.863460 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 00:43:14.863568 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 31 00:43:14.886384 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 00:43:14.886522 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 31 00:43:14.888698 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 00:43:14.889093 systemd[1]: Stopped target network.target - Network.
Oct 31 00:43:14.889180 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 00:43:14.889246 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 31 00:43:14.889510 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 00:43:14.889554 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 31 00:43:14.889805 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 00:43:14.889849 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 31 00:43:14.890410 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 31 00:43:14.890455 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 31 00:43:14.890835 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 31 00:43:14.891446 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 31 00:43:14.914988 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 00:43:14.915165 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 31 00:43:14.919076 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 31 00:43:14.919169 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:43:14.924963 systemd-networkd[777]: eth0: DHCPv6 lease lost
Oct 31 00:43:14.957411 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 00:43:14.957618 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 31 00:43:14.960407 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 00:43:14.960490 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:43:14.978027 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 31 00:43:14.978999 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 00:43:14.979076 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 00:43:14.979683 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 00:43:14.979745 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:43:14.987700 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 00:43:14.987755 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:43:14.992149 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:43:15.020071 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 00:43:15.020279 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:43:15.021784 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 00:43:15.021837 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:43:15.025314 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 00:43:15.025374 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:43:15.028497 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 00:43:15.028554 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 00:43:15.032210 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 00:43:15.032291 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 31 00:43:15.037673 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 00:43:15.037741 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 00:43:15.050104 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 31 00:43:15.050638 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 31 00:43:15.050744 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:43:15.051300 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 31 00:43:15.051379 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:43:15.051917 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 00:43:15.051989 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:43:15.117859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 00:43:15.118002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:43:15.119135 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 00:43:15.119291 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 31 00:43:15.284420 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 00:43:15.284599 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 31 00:43:15.530419 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 00:43:15.530608 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 31 00:43:15.532564 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 31 00:43:15.535740 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 00:43:15.535865 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 31 00:43:15.550148 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 31 00:43:15.561488 systemd[1]: Switching root.
Oct 31 00:43:15.600178 systemd-journald[193]: Journal stopped
Oct 31 00:43:17.395426 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Oct 31 00:43:17.395506 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 00:43:17.395525 kernel: SELinux: policy capability open_perms=1
Oct 31 00:43:17.395536 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 00:43:17.395553 kernel: SELinux: policy capability always_check_network=0
Oct 31 00:43:17.395576 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 00:43:17.395592 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 00:43:17.395604 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 00:43:17.395615 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 00:43:17.395627 kernel: audit: type=1403 audit(1761871396.406:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 31 00:43:17.395639 systemd[1]: Successfully loaded SELinux policy in 46.344ms.
Oct 31 00:43:17.395655 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.264ms.
Oct 31 00:43:17.395667 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 31 00:43:17.395680 systemd[1]: Detected virtualization kvm.
Oct 31 00:43:17.395698 systemd[1]: Detected architecture x86-64.
Oct 31 00:43:17.395709 systemd[1]: Detected first boot.
Oct 31 00:43:17.395721 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 00:43:17.395733 zram_generator::config[1055]: No configuration found.
Oct 31 00:43:17.395746 systemd[1]: Populated /etc with preset unit settings.
Oct 31 00:43:17.395758 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 31 00:43:17.395770 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 31 00:43:17.395782 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 31 00:43:17.395800 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 31 00:43:17.395813 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 31 00:43:17.395825 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 31 00:43:17.395840 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 31 00:43:17.395852 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 31 00:43:17.395864 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 31 00:43:17.395892 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 31 00:43:17.395904 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 31 00:43:17.395924 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 00:43:17.395936 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 00:43:17.395948 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 31 00:43:17.395960 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 31 00:43:17.395972 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 31 00:43:17.395984 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 00:43:17.395996 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 31 00:43:17.396008 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 00:43:17.396020 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 31 00:43:17.396038 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 31 00:43:17.396050 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 31 00:43:17.396061 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 31 00:43:17.396073 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 00:43:17.396085 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 00:43:17.396097 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 00:43:17.396109 systemd[1]: Reached target swap.target - Swaps.
Oct 31 00:43:17.396130 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 31 00:43:17.396149 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 31 00:43:17.396161 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 00:43:17.396174 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 00:43:17.396186 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 00:43:17.396198 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 31 00:43:17.396210 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 31 00:43:17.396222 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 31 00:43:17.396233 systemd[1]: Mounting media.mount - External Media Directory...
Oct 31 00:43:17.396245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:43:17.396263 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 31 00:43:17.396275 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 31 00:43:17.396287 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 31 00:43:17.396299 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 31 00:43:17.396311 systemd[1]: Reached target machines.target - Containers.
Oct 31 00:43:17.396322 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 31 00:43:17.396334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:43:17.396346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 00:43:17.396364 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 31 00:43:17.396376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:43:17.396388 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 00:43:17.396401 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:43:17.396413 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 31 00:43:17.396424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:43:17.396436 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 00:43:17.396449 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 31 00:43:17.396461 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 31 00:43:17.396476 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 31 00:43:17.396488 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 31 00:43:17.396499 kernel: fuse: init (API version 7.39)
Oct 31 00:43:17.396511 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 00:43:17.396523 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 00:43:17.396534 kernel: ACPI: bus type drm_connector registered
Oct 31 00:43:17.396546 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 31 00:43:17.396557 kernel: loop: module loaded
Oct 31 00:43:17.396569 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 31 00:43:17.396584 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 00:43:17.396595 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 31 00:43:17.396628 systemd-journald[1132]: Collecting audit messages is disabled.
Oct 31 00:43:17.396650 systemd[1]: Stopped verity-setup.service.
Oct 31 00:43:17.396663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:43:17.396675 systemd-journald[1132]: Journal started
Oct 31 00:43:17.396706 systemd-journald[1132]: Runtime Journal (/run/log/journal/a6ef2756c2e945f297cd0375a02394fb) is 6.0M, max 48.4M, 42.3M free.
Oct 31 00:43:17.060207 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 00:43:17.081957 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 31 00:43:17.082659 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 31 00:43:17.402982 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 00:43:17.403800 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 31 00:43:17.405608 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 31 00:43:17.407487 systemd[1]: Mounted media.mount - External Media Directory.
Oct 31 00:43:17.409212 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 31 00:43:17.411062 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 31 00:43:17.412954 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 31 00:43:17.414794 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 31 00:43:17.416988 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 00:43:17.419314 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 00:43:17.419499 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 31 00:43:17.421719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:43:17.421919 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:43:17.424208 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:43:17.424392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 00:43:17.426406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:43:17.426594 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:43:17.428957 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 00:43:17.429144 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 31 00:43:17.431167 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:43:17.431351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:43:17.433644 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 00:43:17.436184 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 00:43:17.438776 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 31 00:43:17.456983 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 31 00:43:17.464985 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 31 00:43:17.468429 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 31 00:43:17.470467 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 00:43:17.470646 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 00:43:17.473728 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 31 00:43:17.477474 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 31 00:43:17.486619 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 31 00:43:17.488758 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:43:17.491380 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 31 00:43:17.499769 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 31 00:43:17.502618 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:43:17.522060 systemd-journald[1132]: Time spent on flushing to /var/log/journal/a6ef2756c2e945f297cd0375a02394fb is 15.878ms for 945 entries.
Oct 31 00:43:17.522060 systemd-journald[1132]: System Journal (/var/log/journal/a6ef2756c2e945f297cd0375a02394fb) is 8.0M, max 195.6M, 187.6M free.
Oct 31 00:43:17.551224 systemd-journald[1132]: Received client request to flush runtime journal.
Oct 31 00:43:17.507103 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 31 00:43:17.510244 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:43:17.517098 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 00:43:17.556945 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 31 00:43:17.562375 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 31 00:43:17.573460 kernel: loop0: detected capacity change from 0 to 142488
Oct 31 00:43:17.573436 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 00:43:17.576464 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 31 00:43:17.578556 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 31 00:43:17.581431 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 31 00:43:17.584361 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 31 00:43:17.589510 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 31 00:43:17.598784 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 31 00:43:17.608902 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 31 00:43:17.658128 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 31 00:43:17.661936 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 31 00:43:17.665341 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 00:43:17.693404 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 31 00:43:17.694244 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 31 00:43:17.698563 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Oct 31 00:43:17.698591 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Oct 31 00:43:17.700546 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 31 00:43:17.708896 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 00:43:17.717033 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 31 00:43:17.718049 kernel: loop1: detected capacity change from 0 to 219144
Oct 31 00:43:17.763869 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 31 00:43:17.800967 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 00:43:17.813164 kernel: loop2: detected capacity change from 0 to 140768
Oct 31 00:43:17.828339 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Oct 31 00:43:17.828368 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Oct 31 00:43:17.836673 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 00:43:17.915941 kernel: loop3: detected capacity change from 0 to 142488
Oct 31 00:43:17.931916 kernel: loop4: detected capacity change from 0 to 219144
Oct 31 00:43:17.941910 kernel: loop5: detected capacity change from 0 to 140768
Oct 31 00:43:17.958591 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Oct 31 00:43:17.959434 (sd-merge)[1197]: Merged extensions into '/usr'.
Oct 31 00:43:17.963745 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 31 00:43:17.963854 systemd[1]: Reloading...
Oct 31 00:43:18.120961 zram_generator::config[1220]: No configuration found.
Oct 31 00:43:18.264750 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 00:43:18.393539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:43:18.443658 systemd[1]: Reloading finished in 479 ms.
Oct 31 00:43:18.484427 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 31 00:43:18.486798 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 31 00:43:18.514076 systemd[1]: Starting ensure-sysext.service...
Oct 31 00:43:18.517210 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 00:43:18.563920 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Oct 31 00:43:18.563938 systemd[1]: Reloading...
Oct 31 00:43:18.624840 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 31 00:43:18.625422 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 31 00:43:18.626895 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 31 00:43:18.627343 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 31 00:43:18.627447 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Oct 31 00:43:18.635037 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 00:43:18.635052 systemd-tmpfiles[1261]: Skipping /boot
Oct 31 00:43:18.651902 zram_generator::config[1289]: No configuration found.
Oct 31 00:43:18.653207 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 00:43:18.653301 systemd-tmpfiles[1261]: Skipping /boot
Oct 31 00:43:18.797408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:43:18.847513 systemd[1]: Reloading finished in 283 ms.
Oct 31 00:43:18.866862 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 31 00:43:18.879460 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 00:43:18.888384 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 31 00:43:18.892037 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 31 00:43:18.895406 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 31 00:43:18.902225 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 00:43:18.907121 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 00:43:18.914585 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 31 00:43:18.921313 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:43:18.921984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:43:18.925955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:43:18.940248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:43:18.945169 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:43:18.947490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:43:18.950860 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 31 00:43:18.952800 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:43:18.954667 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 31 00:43:18.957830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:43:18.958088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:43:18.960746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:43:18.961034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:43:18.964024 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:43:18.964222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:43:18.967261 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Oct 31 00:43:18.980494 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:43:18.980669 augenrules[1356]: No rules
Oct 31 00:43:18.980727 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:43:18.982835 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 31 00:43:18.985627 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 31 00:43:18.993799 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:43:18.994160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:43:18.999967 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:43:19.059146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:43:19.070333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:43:19.079148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:43:19.079325 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:43:19.080390 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 00:43:19.087389 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 31 00:43:19.091243 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 31 00:43:19.105214 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1365)
Oct 31 00:43:19.100845 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 31 00:43:19.108385 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 31 00:43:19.111257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:43:19.111552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:43:19.114470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:43:19.115574 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:43:19.119658 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:43:19.119895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:43:19.174252 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 31 00:43:19.183272 systemd[1]: Finished ensure-sysext.service.
Oct 31 00:43:19.201352 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:43:19.201582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 00:43:19.206925 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 31 00:43:19.210092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 00:43:19.214383 systemd-resolved[1331]: Positive Trust Anchors:
Oct 31 00:43:19.214593 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 00:43:19.214636 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 00:43:19.219455 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 00:43:19.221739 systemd-resolved[1331]: Defaulting to hostname 'linux'.
Oct 31 00:43:19.223406 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 00:43:19.227637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 00:43:19.229682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 00:43:19.232269 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 00:43:19.242199 kernel: ACPI: button: Power Button [PWRF]
Oct 31 00:43:19.249178 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 31 00:43:19.251210 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 00:43:19.251249 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 00:43:19.253382 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 00:43:19.254910 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 31 00:43:19.259384 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 00:43:19.259617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 00:43:19.263403 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 00:43:19.263915 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 00:43:19.266222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 00:43:19.266894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 00:43:19.269301 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 00:43:19.269516 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 00:43:19.274143 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 00:43:19.378942 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 00:43:19.387726 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 31 00:43:19.388193 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 31 00:43:19.388387 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 31 00:43:19.388534 kernel: mousedev: PS/2 mouse device common for all mice
Oct 31 00:43:19.402191 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 31 00:43:19.404244 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 00:43:19.404315 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 00:43:19.447100 systemd-networkd[1409]: lo: Link UP
Oct 31 00:43:19.447461 systemd-networkd[1409]: lo: Gained carrier
Oct 31 00:43:19.451457 systemd-networkd[1409]: Enumeration completed
Oct 31 00:43:19.451670 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 00:43:19.454169 systemd[1]: Reached target network.target - Network.
Oct 31 00:43:19.456408 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:43:19.456416 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 00:43:19.457474 systemd-networkd[1409]: eth0: Link UP
Oct 31 00:43:19.457481 systemd-networkd[1409]: eth0: Gained carrier
Oct 31 00:43:19.457494 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 00:43:19.463404 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 31 00:43:19.468966 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 00:43:19.469194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 00:43:19.484584 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 31 00:43:19.547249 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 31 00:43:20.192396 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 31 00:43:20.192401 systemd-resolved[1331]: Clock change detected. Flushing caches.
Oct 31 00:43:20.192821 systemd-timesyncd[1410]: Initial clock synchronization to Fri 2025-10-31 00:43:20.192279 UTC.
Oct 31 00:43:20.194773 systemd[1]: Reached target time-set.target - System Time Set.
Oct 31 00:43:20.200536 kernel: kvm_amd: TSC scaling supported
Oct 31 00:43:20.200581 kernel: kvm_amd: Nested Virtualization enabled
Oct 31 00:43:20.200598 kernel: kvm_amd: Nested Paging enabled
Oct 31 00:43:20.201500 kernel: kvm_amd: LBR virtualization supported
Oct 31 00:43:20.202777 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 31 00:43:20.203900 kernel: kvm_amd: Virtual GIF supported
Oct 31 00:43:20.226141 kernel: EDAC MC: Ver: 3.0.0
Oct 31 00:43:20.260594 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 31 00:43:20.317180 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 31 00:43:20.319712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 00:43:20.333315 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 00:43:20.371275 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 31 00:43:20.373743 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 00:43:20.375689 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 00:43:20.377877 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 31 00:43:20.380156 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 31 00:43:20.382768 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 31 00:43:20.384783 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 31 00:43:20.386954 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 31 00:43:20.389199 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 31 00:43:20.389245 systemd[1]: Reached target paths.target - Path Units.
Oct 31 00:43:20.390876 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 00:43:20.393599 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 31 00:43:20.397326 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 31 00:43:20.414873 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 31 00:43:20.418138 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 31 00:43:20.421067 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 31 00:43:20.423086 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 00:43:20.424834 systemd[1]: Reached target basic.target - Basic System.
Oct 31 00:43:20.426535 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 31 00:43:20.426575 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 31 00:43:20.427812 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 31 00:43:20.430582 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 00:43:20.433138 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 31 00:43:20.437072 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 31 00:43:20.441843 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 31 00:43:20.444319 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 31 00:43:20.447229 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 31 00:43:20.451090 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 31 00:43:20.457133 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 31 00:43:20.465035 jq[1438]: false
Oct 31 00:43:20.468140 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 31 00:43:20.474144 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 31 00:43:20.474798 dbus-daemon[1437]: [system] SELinux support is enabled
Oct 31 00:43:20.477178 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 31 00:43:20.477736 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 31 00:43:20.479085 systemd[1]: Starting update-engine.service - Update Engine...
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found loop3
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found loop4
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found loop5
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found sr0
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found vda
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found vda1
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found vda2
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found vda3
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found usr
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found vda4
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found vda6
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found vda7
Oct 31 00:43:20.483300 extend-filesystems[1439]: Found vda9
Oct 31 00:43:20.483300 extend-filesystems[1439]: Checking size of /dev/vda9
Oct 31 00:43:20.564808 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 31 00:43:20.564866 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1378)
Oct 31 00:43:20.564888 extend-filesystems[1439]: Resized partition /dev/vda9
Oct 31 00:43:20.483683 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 31 00:43:20.566845 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024)
Oct 31 00:43:20.487044 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 31 00:43:20.502437 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 31 00:43:20.569277 jq[1455]: true
Oct 31 00:43:20.521536 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 31 00:43:20.521820 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 31 00:43:20.522303 systemd[1]: motdgen.service: Deactivated successfully.
Oct 31 00:43:20.522589 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 31 00:43:20.531576 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 31 00:43:20.531850 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 31 00:43:20.553916 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 31 00:43:20.554407 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 31 00:43:20.555730 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 00:43:20.555749 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 31 00:43:20.570354 jq[1463]: true
Oct 31 00:43:20.572706 update_engine[1452]: I20251031 00:43:20.572604 1452 main.cc:92] Flatcar Update Engine starting
Oct 31 00:43:20.574990 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 31 00:43:20.576875 update_engine[1452]: I20251031 00:43:20.575886 1452 update_check_scheduler.cc:74] Next update check in 10m22s
Oct 31 00:43:20.584430 systemd[1]: Started update-engine.service - Update Engine.
Oct 31 00:43:20.591948 tar[1462]: linux-amd64/LICENSE
Oct 31 00:43:20.604200 tar[1462]: linux-amd64/helm
Oct 31 00:43:20.601194 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 31 00:43:20.606439 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 31 00:43:20.606439 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 31 00:43:20.606439 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 31 00:43:20.603372 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 31 00:43:20.625012 extend-filesystems[1439]: Resized filesystem in /dev/vda9
Oct 31 00:43:20.606068 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 31 00:43:20.606100 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 31 00:43:20.606817 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 31 00:43:20.607008 systemd-logind[1450]: New seat seat0.
Oct 31 00:43:20.607302 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 31 00:43:20.615811 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 31 00:43:20.640850 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 31 00:43:20.665952 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 31 00:43:20.693837 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 31 00:43:20.727539 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 31 00:43:20.735390 systemd[1]: issuegen.service: Deactivated successfully.
Oct 31 00:43:20.735695 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 31 00:43:20.794106 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 31 00:43:20.962876 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 31 00:43:20.995674 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 31 00:43:21.010479 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 31 00:43:21.013441 systemd[1]: Reached target getty.target - Login Prompts.
Oct 31 00:43:21.176160 systemd-networkd[1409]: eth0: Gained IPv6LL
Oct 31 00:43:21.180009 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 31 00:43:21.183915 systemd[1]: Reached target network-online.target - Network is Online.
Oct 31 00:43:21.192512 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 31 00:43:21.204034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:43:21.232886 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 31 00:43:21.271415 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 31 00:43:21.284418 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 31 00:43:21.285237 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 31 00:43:21.291706 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 31 00:43:21.532348 tar[1462]: linux-amd64/README.md
Oct 31 00:43:21.553302 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 31 00:43:21.584296 containerd[1476]: time="2025-10-31T00:43:21.584149193Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 31 00:43:21.650487 containerd[1476]: time="2025-10-31T00:43:21.650382073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:43:21.653249 containerd[1476]: time="2025-10-31T00:43:21.653196992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:43:21.653249 containerd[1476]: time="2025-10-31T00:43:21.653236907Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 31 00:43:21.653346 containerd[1476]: time="2025-10-31T00:43:21.653258427Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 31 00:43:21.653520 containerd[1476]: time="2025-10-31T00:43:21.653496273Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 31 00:43:21.653552 containerd[1476]: time="2025-10-31T00:43:21.653531019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 31 00:43:21.653649 containerd[1476]: time="2025-10-31T00:43:21.653626828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:43:21.653649 containerd[1476]: time="2025-10-31T00:43:21.653645503Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:43:21.653896 containerd[1476]: time="2025-10-31T00:43:21.653874172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:43:21.653896 containerd[1476]: time="2025-10-31T00:43:21.653893669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 31 00:43:21.653973 containerd[1476]: time="2025-10-31T00:43:21.653939154Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:43:21.653973 containerd[1476]: time="2025-10-31T00:43:21.653951908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 31 00:43:21.654090 containerd[1476]: time="2025-10-31T00:43:21.654059329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:43:21.654414 containerd[1476]: time="2025-10-31T00:43:21.654383417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 31 00:43:21.654549 containerd[1476]: time="2025-10-31T00:43:21.654521586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 00:43:21.654549 containerd[1476]: time="2025-10-31T00:43:21.654541123Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 31 00:43:21.654665 containerd[1476]: time="2025-10-31T00:43:21.654643835Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 31 00:43:21.654737 containerd[1476]: time="2025-10-31T00:43:21.654718716Z" level=info msg="metadata content store policy set" policy=shared
Oct 31 00:43:21.656355 bash[1492]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 00:43:21.659035 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 31 00:43:21.661970 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 31 00:43:22.189229 containerd[1476]: time="2025-10-31T00:43:22.189132633Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 31 00:43:22.189229 containerd[1476]: time="2025-10-31T00:43:22.189233021Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 31 00:43:22.189407 containerd[1476]: time="2025-10-31T00:43:22.189256305Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 31 00:43:22.189407 containerd[1476]: time="2025-10-31T00:43:22.189299025Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 31 00:43:22.189407 containerd[1476]: time="2025-10-31T00:43:22.189318221Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 31 00:43:22.189582 containerd[1476]: time="2025-10-31T00:43:22.189552911Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 31 00:43:22.190125 containerd[1476]: time="2025-10-31T00:43:22.190073026Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 31 00:43:22.190425 containerd[1476]: time="2025-10-31T00:43:22.190401623Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 31 00:43:22.190425 containerd[1476]: time="2025-10-31T00:43:22.190423984Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 31 00:43:22.190518 containerd[1476]: time="2025-10-31T00:43:22.190438281Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 31 00:43:22.190518 containerd[1476]: time="2025-10-31T00:43:22.190455484Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 31 00:43:22.190518 containerd[1476]: time="2025-10-31T00:43:22.190469901Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 31 00:43:22.190518 containerd[1476]: time="2025-10-31T00:43:22.190482614Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 31 00:43:22.190518 containerd[1476]: time="2025-10-31T00:43:22.190498504Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 31 00:43:22.190518 containerd[1476]: time="2025-10-31T00:43:22.190513442Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 31 00:43:22.190658 containerd[1476]: time="2025-10-31T00:43:22.190532147Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 31 00:43:22.190658 containerd[1476]: time="2025-10-31T00:43:22.190547947Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 31 00:43:22.190658 containerd[1476]: time="2025-10-31T00:43:22.190560330Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 31 00:43:22.190658 containerd[1476]: time="2025-10-31T00:43:22.190585327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.190658 containerd[1476]: time="2025-10-31T00:43:22.190601177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.190658 containerd[1476]: time="2025-10-31T00:43:22.190630041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.190658 containerd[1476]: time="2025-10-31T00:43:22.190654767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.190846 containerd[1476]: time="2025-10-31T00:43:22.190690885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.190846 containerd[1476]: time="2025-10-31T00:43:22.190717715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.190846 containerd[1476]: time="2025-10-31T00:43:22.190736601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.190846 containerd[1476]: time="2025-10-31T00:43:22.190759454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.190846 containerd[1476]: time="2025-10-31T00:43:22.190780333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.190846 containerd[1476]: time="2025-10-31T00:43:22.190810660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.190846 containerd[1476]: time="2025-10-31T00:43:22.190840035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.191070 containerd[1476]: time="2025-10-31T00:43:22.190873528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.191070 containerd[1476]: time="2025-10-31T00:43:22.190896360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.191070 containerd[1476]: time="2025-10-31T00:43:22.190944501Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 31 00:43:22.191070 containerd[1476]: time="2025-10-31T00:43:22.190996588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.191070 containerd[1476]: time="2025-10-31T00:43:22.191038046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.191194 containerd[1476]: time="2025-10-31T00:43:22.191080416Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 31 00:43:22.191222 containerd[1476]: time="2025-10-31T00:43:22.191186925Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 31 00:43:22.191249 containerd[1476]: time="2025-10-31T00:43:22.191224055Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 31 00:43:22.191249 containerd[1476]: time="2025-10-31T00:43:22.191237129Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 31 00:43:22.191391 containerd[1476]: time="2025-10-31T00:43:22.191249973Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 31 00:43:22.191391 containerd[1476]: time="2025-10-31T00:43:22.191260623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.191391 containerd[1476]: time="2025-10-31T00:43:22.191282895Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 31 00:43:22.191391 containerd[1476]: time="2025-10-31T00:43:22.191323922Z" level=info msg="NRI interface is disabled by configuration."
Oct 31 00:43:22.191391 containerd[1476]: time="2025-10-31T00:43:22.191352456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 31 00:43:22.193040 containerd[1476]: time="2025-10-31T00:43:22.191815835Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 31 00:43:22.193040 containerd[1476]: time="2025-10-31T00:43:22.192555953Z" level=info msg="Connect containerd service" Oct 31 00:43:22.193040 containerd[1476]: time="2025-10-31T00:43:22.192645521Z" level=info msg="using legacy CRI server" Oct 31 00:43:22.193040 containerd[1476]: time="2025-10-31T00:43:22.192660168Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 00:43:22.193040 containerd[1476]: time="2025-10-31T00:43:22.192839655Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 31 00:43:22.193761 containerd[1476]: time="2025-10-31T00:43:22.193726458Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:43:22.194001 containerd[1476]: time="2025-10-31T00:43:22.193916775Z" level=info msg="Start subscribing containerd event" Oct 31 00:43:22.194389 containerd[1476]: time="2025-10-31T00:43:22.194357792Z" level=info msg="Start recovering state" Oct 31 00:43:22.194453 containerd[1476]: time="2025-10-31T00:43:22.194376627Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 31 00:43:22.194503 containerd[1476]: time="2025-10-31T00:43:22.194484139Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 00:43:22.194528 containerd[1476]: time="2025-10-31T00:43:22.194505499Z" level=info msg="Start event monitor" Oct 31 00:43:22.194570 containerd[1476]: time="2025-10-31T00:43:22.194532870Z" level=info msg="Start snapshots syncer" Oct 31 00:43:22.194606 containerd[1476]: time="2025-10-31T00:43:22.194574388Z" level=info msg="Start cni network conf syncer for default" Oct 31 00:43:22.194606 containerd[1476]: time="2025-10-31T00:43:22.194594175Z" level=info msg="Start streaming server" Oct 31 00:43:22.195089 containerd[1476]: time="2025-10-31T00:43:22.194741451Z" level=info msg="containerd successfully booted in 0.613002s" Oct 31 00:43:22.194872 systemd[1]: Started containerd.service - containerd container runtime. Oct 31 00:43:23.134125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:43:23.136694 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 31 00:43:23.141002 systemd[1]: Startup finished in 1.175s (kernel) + 7.676s (initrd) + 6.135s (userspace) = 14.987s. Oct 31 00:43:23.166289 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 00:43:23.469554 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 00:43:23.478220 systemd[1]: Started sshd@0-10.0.0.107:22-10.0.0.1:39402.service - OpenSSH per-connection server daemon (10.0.0.1:39402). Oct 31 00:43:23.523380 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 39402 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:43:23.525848 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:43:23.537037 systemd-logind[1450]: New session 1 of user core. 
Oct 31 00:43:23.538819 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 31 00:43:23.545242 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 31 00:43:23.568038 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 31 00:43:23.584427 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 31 00:43:23.588558 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 31 00:43:23.786604 systemd[1564]: Queued start job for default target default.target.
Oct 31 00:43:23.804412 systemd[1564]: Created slice app.slice - User Application Slice.
Oct 31 00:43:23.804442 systemd[1564]: Reached target paths.target - Paths.
Oct 31 00:43:23.804457 systemd[1564]: Reached target timers.target - Timers.
Oct 31 00:43:23.806464 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 31 00:43:23.851211 kubelet[1549]: E1031 00:43:23.851148 1549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:43:23.856362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:43:23.856634 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:43:23.857109 systemd[1]: kubelet.service: Consumed 2.233s CPU time.
Oct 31 00:43:23.872969 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 31 00:43:23.873154 systemd[1564]: Reached target sockets.target - Sockets.
Oct 31 00:43:23.873175 systemd[1564]: Reached target basic.target - Basic System.
Oct 31 00:43:23.873223 systemd[1564]: Reached target default.target - Main User Target.
Oct 31 00:43:23.873263 systemd[1564]: Startup finished in 273ms.
Oct 31 00:43:23.873787 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 31 00:43:23.888088 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 31 00:43:23.954458 systemd[1]: Started sshd@1-10.0.0.107:22-10.0.0.1:39416.service - OpenSSH per-connection server daemon (10.0.0.1:39416).
Oct 31 00:43:23.992657 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 39416 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:43:23.994496 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:43:23.998665 systemd-logind[1450]: New session 2 of user core.
Oct 31 00:43:24.020111 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 31 00:43:24.076376 sshd[1578]: pam_unix(sshd:session): session closed for user core
Oct 31 00:43:24.092713 systemd[1]: sshd@1-10.0.0.107:22-10.0.0.1:39416.service: Deactivated successfully.
Oct 31 00:43:24.094429 systemd[1]: session-2.scope: Deactivated successfully.
Oct 31 00:43:24.096326 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit.
Oct 31 00:43:24.097715 systemd[1]: Started sshd@2-10.0.0.107:22-10.0.0.1:39418.service - OpenSSH per-connection server daemon (10.0.0.1:39418).
Oct 31 00:43:24.098555 systemd-logind[1450]: Removed session 2.
Oct 31 00:43:24.135865 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 39418 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:43:24.137679 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:43:24.141423 systemd-logind[1450]: New session 3 of user core.
Oct 31 00:43:24.159055 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 31 00:43:24.291450 sshd[1585]: pam_unix(sshd:session): session closed for user core
Oct 31 00:43:24.298941 systemd[1]: sshd@2-10.0.0.107:22-10.0.0.1:39418.service: Deactivated successfully.
Oct 31 00:43:24.300895 systemd[1]: session-3.scope: Deactivated successfully.
Oct 31 00:43:24.302517 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit.
Oct 31 00:43:24.314201 systemd[1]: Started sshd@3-10.0.0.107:22-10.0.0.1:39434.service - OpenSSH per-connection server daemon (10.0.0.1:39434).
Oct 31 00:43:24.315220 systemd-logind[1450]: Removed session 3.
Oct 31 00:43:24.347105 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 39434 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:43:24.348939 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:43:24.353719 systemd-logind[1450]: New session 4 of user core.
Oct 31 00:43:24.364228 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 31 00:43:24.426026 sshd[1592]: pam_unix(sshd:session): session closed for user core
Oct 31 00:43:24.434679 systemd[1]: sshd@3-10.0.0.107:22-10.0.0.1:39434.service: Deactivated successfully.
Oct 31 00:43:24.437438 systemd[1]: session-4.scope: Deactivated successfully.
Oct 31 00:43:24.443396 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit.
Oct 31 00:43:24.458290 systemd[1]: Started sshd@4-10.0.0.107:22-10.0.0.1:39442.service - OpenSSH per-connection server daemon (10.0.0.1:39442).
Oct 31 00:43:24.459324 systemd-logind[1450]: Removed session 4.
Oct 31 00:43:24.491476 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 39442 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:43:24.492995 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:43:24.496857 systemd-logind[1450]: New session 5 of user core.
Oct 31 00:43:24.512154 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 31 00:43:24.578916 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 31 00:43:24.579297 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:43:24.598965 sudo[1602]: pam_unix(sudo:session): session closed for user root
Oct 31 00:43:24.600791 sshd[1599]: pam_unix(sshd:session): session closed for user core
Oct 31 00:43:24.611199 systemd[1]: sshd@4-10.0.0.107:22-10.0.0.1:39442.service: Deactivated successfully.
Oct 31 00:43:24.613633 systemd[1]: session-5.scope: Deactivated successfully.
Oct 31 00:43:24.616090 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit.
Oct 31 00:43:24.617678 systemd[1]: Started sshd@5-10.0.0.107:22-10.0.0.1:39454.service - OpenSSH per-connection server daemon (10.0.0.1:39454).
Oct 31 00:43:24.618558 systemd-logind[1450]: Removed session 5.
Oct 31 00:43:24.667723 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 39454 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:43:24.669475 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:43:24.673320 systemd-logind[1450]: New session 6 of user core.
Oct 31 00:43:24.683052 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 31 00:43:24.737504 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 31 00:43:24.737852 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:43:24.741694 sudo[1611]: pam_unix(sudo:session): session closed for user root
Oct 31 00:43:24.748089 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 31 00:43:24.748424 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:43:24.767146 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 31 00:43:24.769487 auditctl[1614]: No rules
Oct 31 00:43:24.770864 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 31 00:43:24.771153 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 31 00:43:24.772962 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 31 00:43:24.805401 augenrules[1632]: No rules
Oct 31 00:43:24.807266 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 31 00:43:24.808541 sudo[1610]: pam_unix(sudo:session): session closed for user root
Oct 31 00:43:24.810486 sshd[1607]: pam_unix(sshd:session): session closed for user core
Oct 31 00:43:24.821953 systemd[1]: sshd@5-10.0.0.107:22-10.0.0.1:39454.service: Deactivated successfully.
Oct 31 00:43:24.824009 systemd[1]: session-6.scope: Deactivated successfully.
Oct 31 00:43:24.825899 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit.
Oct 31 00:43:24.837264 systemd[1]: Started sshd@6-10.0.0.107:22-10.0.0.1:39470.service - OpenSSH per-connection server daemon (10.0.0.1:39470).
Oct 31 00:43:24.838183 systemd-logind[1450]: Removed session 6.
Oct 31 00:43:24.869601 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 39470 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:43:24.871162 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:43:24.874885 systemd-logind[1450]: New session 7 of user core.
Oct 31 00:43:24.888076 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 31 00:43:24.942113 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 31 00:43:24.942462 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 00:43:25.399151 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 31 00:43:25.400873 (dockerd)[1662]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 31 00:43:26.085887 dockerd[1662]: time="2025-10-31T00:43:26.085806607Z" level=info msg="Starting up"
Oct 31 00:43:28.515682 dockerd[1662]: time="2025-10-31T00:43:28.515580601Z" level=info msg="Loading containers: start."
Oct 31 00:43:28.898969 kernel: Initializing XFRM netlink socket
Oct 31 00:43:29.063022 systemd-networkd[1409]: docker0: Link UP
Oct 31 00:43:29.100398 dockerd[1662]: time="2025-10-31T00:43:29.100331678Z" level=info msg="Loading containers: done."
Oct 31 00:43:29.118023 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1571204522-merged.mount: Deactivated successfully.
Oct 31 00:43:29.120073 dockerd[1662]: time="2025-10-31T00:43:29.120026913Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 31 00:43:29.120175 dockerd[1662]: time="2025-10-31T00:43:29.120150295Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Oct 31 00:43:29.120356 dockerd[1662]: time="2025-10-31T00:43:29.120324221Z" level=info msg="Daemon has completed initialization"
Oct 31 00:43:29.168616 dockerd[1662]: time="2025-10-31T00:43:29.167037134Z" level=info msg="API listen on /run/docker.sock"
Oct 31 00:43:29.167592 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 31 00:43:29.965751 containerd[1476]: time="2025-10-31T00:43:29.965701395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Oct 31 00:43:30.504919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819733814.mount: Deactivated successfully.
Oct 31 00:43:31.855644 containerd[1476]: time="2025-10-31T00:43:31.855560996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:31.856255 containerd[1476]: time="2025-10-31T00:43:31.856179546Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392"
Oct 31 00:43:31.857467 containerd[1476]: time="2025-10-31T00:43:31.857429350Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:31.860404 containerd[1476]: time="2025-10-31T00:43:31.860366859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:31.861747 containerd[1476]: time="2025-10-31T00:43:31.861683148Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.89593217s"
Oct 31 00:43:31.861747 containerd[1476]: time="2025-10-31T00:43:31.861732791Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Oct 31 00:43:31.862779 containerd[1476]: time="2025-10-31T00:43:31.862752022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Oct 31 00:43:32.901503 containerd[1476]: time="2025-10-31T00:43:32.901436386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:32.902232 containerd[1476]: time="2025-10-31T00:43:32.902199858Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757"
Oct 31 00:43:32.903605 containerd[1476]: time="2025-10-31T00:43:32.903553316Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:32.906700 containerd[1476]: time="2025-10-31T00:43:32.906643742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:32.908060 containerd[1476]: time="2025-10-31T00:43:32.908011747Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.045223908s"
Oct 31 00:43:32.908060 containerd[1476]: time="2025-10-31T00:43:32.908055780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Oct 31 00:43:32.908697 containerd[1476]: time="2025-10-31T00:43:32.908652679Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Oct 31 00:43:34.065573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 31 00:43:34.075138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:43:34.335015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:43:34.340642 (kubelet)[1885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 00:43:34.541367 kubelet[1885]: E1031 00:43:34.541283 1885 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:43:34.549020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:43:34.549271 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:43:35.081881 containerd[1476]: time="2025-10-31T00:43:35.081796479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:35.083497 containerd[1476]: time="2025-10-31T00:43:35.083387303Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093"
Oct 31 00:43:35.084547 containerd[1476]: time="2025-10-31T00:43:35.084497805Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:35.088341 containerd[1476]: time="2025-10-31T00:43:35.088297030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:35.089741 containerd[1476]: time="2025-10-31T00:43:35.089686636Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 2.180984775s"
Oct 31 00:43:35.089798 containerd[1476]: time="2025-10-31T00:43:35.089745587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Oct 31 00:43:35.090813 containerd[1476]: time="2025-10-31T00:43:35.090786729Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Oct 31 00:43:37.529461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006594273.mount: Deactivated successfully.
Oct 31 00:43:37.847946 containerd[1476]: time="2025-10-31T00:43:37.847856840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:37.848657 containerd[1476]: time="2025-10-31T00:43:37.848603110Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699"
Oct 31 00:43:37.850130 containerd[1476]: time="2025-10-31T00:43:37.850055543Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:37.852410 containerd[1476]: time="2025-10-31T00:43:37.852361007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:37.853197 containerd[1476]: time="2025-10-31T00:43:37.853151499Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.762333952s"
Oct 31 00:43:37.853197 containerd[1476]: time="2025-10-31T00:43:37.853185453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Oct 31 00:43:37.854276 containerd[1476]: time="2025-10-31T00:43:37.854252384Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Oct 31 00:43:38.333637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3932769306.mount: Deactivated successfully.
Oct 31 00:43:39.682900 containerd[1476]: time="2025-10-31T00:43:39.682810673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:39.684080 containerd[1476]: time="2025-10-31T00:43:39.683986248Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Oct 31 00:43:39.685601 containerd[1476]: time="2025-10-31T00:43:39.685550351Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:39.689193 containerd[1476]: time="2025-10-31T00:43:39.689155612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:39.690374 containerd[1476]: time="2025-10-31T00:43:39.690314576Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.836031735s"
Oct 31 00:43:39.690374 containerd[1476]: time="2025-10-31T00:43:39.690362917Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Oct 31 00:43:39.691068 containerd[1476]: time="2025-10-31T00:43:39.691026752Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Oct 31 00:43:40.135679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741433845.mount: Deactivated successfully.
Oct 31 00:43:40.141632 containerd[1476]: time="2025-10-31T00:43:40.141575742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:40.142342 containerd[1476]: time="2025-10-31T00:43:40.142291043Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Oct 31 00:43:40.143567 containerd[1476]: time="2025-10-31T00:43:40.143525448Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:40.145949 containerd[1476]: time="2025-10-31T00:43:40.145870155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:40.147051 containerd[1476]: time="2025-10-31T00:43:40.146988412Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 455.926855ms"
Oct 31 00:43:40.147051 containerd[1476]: time="2025-10-31T00:43:40.147032896Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Oct 31 00:43:40.147865 containerd[1476]: time="2025-10-31T00:43:40.147759759Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Oct 31 00:43:43.289011 containerd[1476]: time="2025-10-31T00:43:43.288885751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:43.289976 containerd[1476]: time="2025-10-31T00:43:43.289902679Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593"
Oct 31 00:43:43.291302 containerd[1476]: time="2025-10-31T00:43:43.291253882Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:43.294757 containerd[1476]: time="2025-10-31T00:43:43.294710034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 00:43:43.296348 containerd[1476]: time="2025-10-31T00:43:43.296292772Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.148466379s"
Oct 31 00:43:43.296417 containerd[1476]: time="2025-10-31T00:43:43.296348497Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Oct 31 00:43:44.565658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 31 00:43:44.575244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:43:44.751811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:43:44.756907 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 00:43:44.889213 kubelet[2030]: E1031 00:43:44.889039 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 00:43:44.893767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 00:43:44.894020 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 00:43:46.495603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 00:43:46.514165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 00:43:46.543355 systemd[1]: Reloading requested from client PID 2045 ('systemctl') (unit session-7.scope)...
Oct 31 00:43:46.543372 systemd[1]: Reloading...
Oct 31 00:43:46.652534 zram_generator::config[2090]: No configuration found.
Oct 31 00:43:47.087214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 00:43:47.168055 systemd[1]: Reloading finished in 624 ms.
Oct 31 00:43:47.279685 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 31 00:43:47.279841 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 31 00:43:47.280270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:43:47.292253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:43:47.464259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:43:47.485254 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 00:43:47.531680 kubelet[2131]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 00:43:47.531680 kubelet[2131]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:43:47.531680 kubelet[2131]: I1031 00:43:47.530679 2131 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:43:48.115743 kubelet[2131]: I1031 00:43:48.115683 2131 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 00:43:48.115743 kubelet[2131]: I1031 00:43:48.115721 2131 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:43:48.115743 kubelet[2131]: I1031 00:43:48.115763 2131 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 00:43:48.116016 kubelet[2131]: I1031 00:43:48.115776 2131 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 00:43:48.116118 kubelet[2131]: I1031 00:43:48.116096 2131 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 00:43:48.843072 kubelet[2131]: E1031 00:43:48.843019 2131 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 00:43:48.843609 kubelet[2131]: I1031 00:43:48.843360 2131 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:43:48.848667 kubelet[2131]: E1031 00:43:48.848608 2131 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:43:48.848667 kubelet[2131]: I1031 00:43:48.848669 2131 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Oct 31 00:43:48.855153 kubelet[2131]: I1031 00:43:48.855129 2131 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 31 00:43:48.856002 kubelet[2131]: I1031 00:43:48.855976 2131 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:43:48.856177 kubelet[2131]: I1031 00:43:48.856004 2131 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 00:43:48.856281 kubelet[2131]: I1031 00:43:48.856198 2131 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 00:43:48.856281 
kubelet[2131]: I1031 00:43:48.856211 2131 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 00:43:48.856383 kubelet[2131]: I1031 00:43:48.856369 2131 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 00:43:48.860530 kubelet[2131]: I1031 00:43:48.860493 2131 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:43:48.862118 kubelet[2131]: I1031 00:43:48.862092 2131 kubelet.go:475] "Attempting to sync node with API server" Oct 31 00:43:48.862168 kubelet[2131]: I1031 00:43:48.862128 2131 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:43:48.862202 kubelet[2131]: I1031 00:43:48.862175 2131 kubelet.go:387] "Adding apiserver pod source" Oct 31 00:43:48.862227 kubelet[2131]: I1031 00:43:48.862211 2131 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:43:48.863119 kubelet[2131]: E1031 00:43:48.863042 2131 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 00:43:48.863119 kubelet[2131]: E1031 00:43:48.863054 2131 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 00:43:48.866969 kubelet[2131]: I1031 00:43:48.864544 2131 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 31 00:43:48.866969 kubelet[2131]: I1031 00:43:48.865262 2131 kubelet.go:940] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 00:43:48.866969 kubelet[2131]: I1031 00:43:48.865297 2131 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 00:43:48.866969 kubelet[2131]: W1031 00:43:48.865424 2131 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 00:43:48.869741 kubelet[2131]: I1031 00:43:48.869709 2131 server.go:1262] "Started kubelet" Oct 31 00:43:48.869857 kubelet[2131]: I1031 00:43:48.869816 2131 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 00:43:48.870123 kubelet[2131]: I1031 00:43:48.870090 2131 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:43:48.870175 kubelet[2131]: I1031 00:43:48.870135 2131 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 00:43:48.870585 kubelet[2131]: I1031 00:43:48.870552 2131 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:43:48.870997 kubelet[2131]: I1031 00:43:48.870959 2131 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 00:43:48.873598 kubelet[2131]: I1031 00:43:48.873548 2131 server.go:310] "Adding debug handlers to kubelet server" Oct 31 00:43:48.877245 kubelet[2131]: I1031 00:43:48.877203 2131 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:43:48.879106 kubelet[2131]: I1031 00:43:48.878157 2131 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 00:43:48.879106 kubelet[2131]: I1031 00:43:48.878738 2131 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 00:43:48.879106 kubelet[2131]: I1031 
00:43:48.878854 2131 reconciler.go:29] "Reconciler: start to sync state" Oct 31 00:43:48.879636 kubelet[2131]: E1031 00:43:48.879567 2131 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 00:43:48.879636 kubelet[2131]: E1031 00:43:48.879611 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:48.880368 kubelet[2131]: I1031 00:43:48.880335 2131 factory.go:223] Registration of the systemd container factory successfully Oct 31 00:43:48.880554 kubelet[2131]: I1031 00:43:48.880505 2131 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:43:48.880650 kubelet[2131]: E1031 00:43:48.880556 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="200ms" Oct 31 00:43:48.883993 kubelet[2131]: I1031 00:43:48.883897 2131 factory.go:223] Registration of the containerd container factory successfully Oct 31 00:43:48.885779 kubelet[2131]: E1031 00:43:48.883361 2131 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736cbad1af8ef4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 00:43:48.869656308 +0000 UTC m=+1.380113867,LastTimestamp:2025-10-31 00:43:48.869656308 +0000 UTC m=+1.380113867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 00:43:48.902364 kubelet[2131]: I1031 00:43:48.902090 2131 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:43:48.902364 kubelet[2131]: I1031 00:43:48.902110 2131 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:43:48.902364 kubelet[2131]: I1031 00:43:48.902131 2131 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:43:48.905013 kubelet[2131]: I1031 00:43:48.904994 2131 policy_none.go:49] "None policy: Start" Oct 31 00:43:48.905108 kubelet[2131]: I1031 00:43:48.905095 2131 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 00:43:48.905171 kubelet[2131]: I1031 00:43:48.905159 2131 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 00:43:48.906353 kubelet[2131]: I1031 00:43:48.906324 2131 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 31 00:43:48.908229 kubelet[2131]: I1031 00:43:48.908179 2131 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 31 00:43:48.908229 kubelet[2131]: I1031 00:43:48.908225 2131 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 00:43:48.908408 kubelet[2131]: I1031 00:43:48.908261 2131 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 00:43:48.908408 kubelet[2131]: E1031 00:43:48.908329 2131 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:43:48.909151 kubelet[2131]: E1031 00:43:48.909123 2131 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 00:43:48.909703 kubelet[2131]: I1031 00:43:48.909649 2131 policy_none.go:47] "Start" Oct 31 00:43:48.915312 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 00:43:48.931283 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 31 00:43:48.934729 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 31 00:43:48.947975 kubelet[2131]: E1031 00:43:48.947937 2131 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 00:43:48.948503 kubelet[2131]: I1031 00:43:48.948230 2131 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:43:48.948503 kubelet[2131]: I1031 00:43:48.948252 2131 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:43:48.948860 kubelet[2131]: I1031 00:43:48.948832 2131 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:43:48.949559 kubelet[2131]: E1031 00:43:48.949525 2131 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 00:43:48.949601 kubelet[2131]: E1031 00:43:48.949593 2131 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 00:43:49.051161 kubelet[2131]: I1031 00:43:49.051103 2131 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:43:49.051662 kubelet[2131]: E1031 00:43:49.051609 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Oct 31 00:43:49.059234 systemd[1]: Created slice kubepods-burstable-pode77551c38618623fadb3aa91a740c1ce.slice - libcontainer container kubepods-burstable-pode77551c38618623fadb3aa91a740c1ce.slice. 
Oct 31 00:43:49.071017 kubelet[2131]: E1031 00:43:49.070968 2131 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:43:49.075175 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 31 00:43:49.077291 kubelet[2131]: E1031 00:43:49.077224 2131 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:43:49.078987 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 31 00:43:49.079710 kubelet[2131]: I1031 00:43:49.079672 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:49.079710 kubelet[2131]: I1031 00:43:49.079702 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:49.079822 kubelet[2131]: I1031 00:43:49.079722 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") 
" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:49.079822 kubelet[2131]: I1031 00:43:49.079744 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 31 00:43:49.079822 kubelet[2131]: I1031 00:43:49.079758 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:49.079822 kubelet[2131]: I1031 00:43:49.079777 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:49.079822 kubelet[2131]: I1031 00:43:49.079812 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e77551c38618623fadb3aa91a740c1ce-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e77551c38618623fadb3aa91a740c1ce\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:43:49.079951 kubelet[2131]: I1031 00:43:49.079826 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e77551c38618623fadb3aa91a740c1ce-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e77551c38618623fadb3aa91a740c1ce\") " 
pod="kube-system/kube-apiserver-localhost" Oct 31 00:43:49.079951 kubelet[2131]: I1031 00:43:49.079846 2131 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e77551c38618623fadb3aa91a740c1ce-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e77551c38618623fadb3aa91a740c1ce\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:43:49.080800 kubelet[2131]: E1031 00:43:49.080775 2131 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:43:49.081111 kubelet[2131]: E1031 00:43:49.081074 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="400ms" Oct 31 00:43:49.254301 kubelet[2131]: I1031 00:43:49.254130 2131 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:43:49.254681 kubelet[2131]: E1031 00:43:49.254622 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Oct 31 00:43:49.379755 kubelet[2131]: E1031 00:43:49.379680 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:49.380938 containerd[1476]: time="2025-10-31T00:43:49.380846232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e77551c38618623fadb3aa91a740c1ce,Namespace:kube-system,Attempt:0,}" Oct 31 00:43:49.382599 kubelet[2131]: E1031 00:43:49.382566 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:49.383583 containerd[1476]: time="2025-10-31T00:43:49.383526398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 31 00:43:49.385198 kubelet[2131]: E1031 00:43:49.385177 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:49.385785 containerd[1476]: time="2025-10-31T00:43:49.385744438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 31 00:43:49.481811 kubelet[2131]: E1031 00:43:49.481757 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="800ms" Oct 31 00:43:49.657237 kubelet[2131]: I1031 00:43:49.657174 2131 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:43:49.657802 kubelet[2131]: E1031 00:43:49.657754 2131 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Oct 31 00:43:49.897665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601758860.mount: Deactivated successfully. 
Oct 31 00:43:49.903320 containerd[1476]: time="2025-10-31T00:43:49.903253512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:43:49.904169 containerd[1476]: time="2025-10-31T00:43:49.904110629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 31 00:43:49.907025 containerd[1476]: time="2025-10-31T00:43:49.906985541Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:43:49.908501 containerd[1476]: time="2025-10-31T00:43:49.908421113Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:43:49.909157 containerd[1476]: time="2025-10-31T00:43:49.909118681Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:43:49.909901 containerd[1476]: time="2025-10-31T00:43:49.909834514Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 00:43:49.911962 containerd[1476]: time="2025-10-31T00:43:49.911885049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 00:43:49.914562 containerd[1476]: time="2025-10-31T00:43:49.912517686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 00:43:49.915626 
containerd[1476]: time="2025-10-31T00:43:49.915584677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.741855ms" Oct 31 00:43:49.917589 containerd[1476]: time="2025-10-31T00:43:49.917560933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 533.942172ms" Oct 31 00:43:49.920404 containerd[1476]: time="2025-10-31T00:43:49.920358510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 539.373799ms" Oct 31 00:43:49.986048 kubelet[2131]: E1031 00:43:49.985992 2131 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 00:43:50.050733 containerd[1476]: time="2025-10-31T00:43:50.050629324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:43:50.050733 containerd[1476]: time="2025-10-31T00:43:50.050699616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:43:50.050733 containerd[1476]: time="2025-10-31T00:43:50.050716438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:43:50.050957 containerd[1476]: time="2025-10-31T00:43:50.050802840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:43:50.052254 containerd[1476]: time="2025-10-31T00:43:50.051941575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:43:50.052254 containerd[1476]: time="2025-10-31T00:43:50.052017427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:43:50.052254 containerd[1476]: time="2025-10-31T00:43:50.052037375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:43:50.052254 containerd[1476]: time="2025-10-31T00:43:50.052147531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:43:50.058447 containerd[1476]: time="2025-10-31T00:43:50.058145209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:43:50.058447 containerd[1476]: time="2025-10-31T00:43:50.058211183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:43:50.058447 containerd[1476]: time="2025-10-31T00:43:50.058230850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:43:50.058447 containerd[1476]: time="2025-10-31T00:43:50.058366164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:43:50.078117 systemd[1]: Started cri-containerd-c97d30df8c2fbae2950de9421046ea7708cf009be724c8f3578626d10ecb510b.scope - libcontainer container c97d30df8c2fbae2950de9421046ea7708cf009be724c8f3578626d10ecb510b. Oct 31 00:43:50.080457 systemd[1]: Started cri-containerd-e975aa16e0d60c16544e810603c71a444453d2b6fc1a374da0adb6967d5f8704.scope - libcontainer container e975aa16e0d60c16544e810603c71a444453d2b6fc1a374da0adb6967d5f8704. Oct 31 00:43:50.085944 systemd[1]: Started cri-containerd-0b06770dbb520da1eaefda6034ec444641bcbdf03bacea5301aad164892627e1.scope - libcontainer container 0b06770dbb520da1eaefda6034ec444641bcbdf03bacea5301aad164892627e1. Oct 31 00:43:50.124803 containerd[1476]: time="2025-10-31T00:43:50.124746009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e975aa16e0d60c16544e810603c71a444453d2b6fc1a374da0adb6967d5f8704\"" Oct 31 00:43:50.126801 kubelet[2131]: E1031 00:43:50.126734 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:50.129188 containerd[1476]: time="2025-10-31T00:43:50.129099153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c97d30df8c2fbae2950de9421046ea7708cf009be724c8f3578626d10ecb510b\"" Oct 31 00:43:50.129806 kubelet[2131]: E1031 00:43:50.129770 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:50.135629 containerd[1476]: time="2025-10-31T00:43:50.135592941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e77551c38618623fadb3aa91a740c1ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b06770dbb520da1eaefda6034ec444641bcbdf03bacea5301aad164892627e1\"" Oct 31 00:43:50.139776 kubelet[2131]: E1031 00:43:50.139592 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:50.142087 containerd[1476]: time="2025-10-31T00:43:50.142063817Z" level=info msg="CreateContainer within sandbox \"e975aa16e0d60c16544e810603c71a444453d2b6fc1a374da0adb6967d5f8704\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 00:43:50.144494 containerd[1476]: time="2025-10-31T00:43:50.144449691Z" level=info msg="CreateContainer within sandbox \"c97d30df8c2fbae2950de9421046ea7708cf009be724c8f3578626d10ecb510b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 00:43:50.147445 containerd[1476]: time="2025-10-31T00:43:50.147390676Z" level=info msg="CreateContainer within sandbox \"0b06770dbb520da1eaefda6034ec444641bcbdf03bacea5301aad164892627e1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 00:43:50.163258 containerd[1476]: time="2025-10-31T00:43:50.163057417Z" level=info msg="CreateContainer within sandbox \"c97d30df8c2fbae2950de9421046ea7708cf009be724c8f3578626d10ecb510b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bc3186798aee66853009790837e0820aa21082e702786417ac29810927361d3a\"" Oct 31 00:43:50.164406 containerd[1476]: time="2025-10-31T00:43:50.164373054Z" level=info msg="StartContainer for \"bc3186798aee66853009790837e0820aa21082e702786417ac29810927361d3a\"" Oct 31 00:43:50.169613 containerd[1476]: 
time="2025-10-31T00:43:50.169574539Z" level=info msg="CreateContainer within sandbox \"e975aa16e0d60c16544e810603c71a444453d2b6fc1a374da0adb6967d5f8704\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b70f6535cea24eb4ccadf8e4a012c9d71fd6905ce8b9807dae2b0c5c7d9086af\"" Oct 31 00:43:50.170104 containerd[1476]: time="2025-10-31T00:43:50.170072022Z" level=info msg="StartContainer for \"b70f6535cea24eb4ccadf8e4a012c9d71fd6905ce8b9807dae2b0c5c7d9086af\"" Oct 31 00:43:50.173870 containerd[1476]: time="2025-10-31T00:43:50.173736394Z" level=info msg="CreateContainer within sandbox \"0b06770dbb520da1eaefda6034ec444641bcbdf03bacea5301aad164892627e1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b70cc1eb5cd03d2f8c9f60d4af441c299ae5c531251560b4f8d5b74640f9f0ae\"" Oct 31 00:43:50.174582 containerd[1476]: time="2025-10-31T00:43:50.174540011Z" level=info msg="StartContainer for \"b70cc1eb5cd03d2f8c9f60d4af441c299ae5c531251560b4f8d5b74640f9f0ae\"" Oct 31 00:43:50.196120 systemd[1]: Started cri-containerd-bc3186798aee66853009790837e0820aa21082e702786417ac29810927361d3a.scope - libcontainer container bc3186798aee66853009790837e0820aa21082e702786417ac29810927361d3a. Oct 31 00:43:50.197398 kubelet[2131]: E1031 00:43:50.197331 2131 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 00:43:50.200424 systemd[1]: Started cri-containerd-b70f6535cea24eb4ccadf8e4a012c9d71fd6905ce8b9807dae2b0c5c7d9086af.scope - libcontainer container b70f6535cea24eb4ccadf8e4a012c9d71fd6905ce8b9807dae2b0c5c7d9086af. 
Oct 31 00:43:50.206118 systemd[1]: Started cri-containerd-b70cc1eb5cd03d2f8c9f60d4af441c299ae5c531251560b4f8d5b74640f9f0ae.scope - libcontainer container b70cc1eb5cd03d2f8c9f60d4af441c299ae5c531251560b4f8d5b74640f9f0ae. Oct 31 00:43:50.243410 kubelet[2131]: E1031 00:43:50.243351 2131 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 00:43:50.250706 containerd[1476]: time="2025-10-31T00:43:50.250572619Z" level=info msg="StartContainer for \"bc3186798aee66853009790837e0820aa21082e702786417ac29810927361d3a\" returns successfully" Oct 31 00:43:50.256396 containerd[1476]: time="2025-10-31T00:43:50.256359321Z" level=info msg="StartContainer for \"b70cc1eb5cd03d2f8c9f60d4af441c299ae5c531251560b4f8d5b74640f9f0ae\" returns successfully" Oct 31 00:43:50.260932 containerd[1476]: time="2025-10-31T00:43:50.260705362Z" level=info msg="StartContainer for \"b70f6535cea24eb4ccadf8e4a012c9d71fd6905ce8b9807dae2b0c5c7d9086af\" returns successfully" Oct 31 00:43:50.283372 kubelet[2131]: E1031 00:43:50.283331 2131 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="1.6s" Oct 31 00:43:50.303195 kubelet[2131]: E1031 00:43:50.303159 2131 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 00:43:50.459692 kubelet[2131]: I1031 
00:43:50.459579 2131 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:43:50.924403 kubelet[2131]: E1031 00:43:50.924369 2131 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:43:50.924865 kubelet[2131]: E1031 00:43:50.924795 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:50.925766 kubelet[2131]: E1031 00:43:50.925731 2131 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:43:50.925899 kubelet[2131]: E1031 00:43:50.925876 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:50.928200 kubelet[2131]: E1031 00:43:50.928177 2131 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:43:50.928313 kubelet[2131]: E1031 00:43:50.928292 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:51.669950 kubelet[2131]: I1031 00:43:51.669879 2131 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:43:51.669950 kubelet[2131]: E1031 00:43:51.669947 2131 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 00:43:51.680459 kubelet[2131]: E1031 00:43:51.680315 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:51.781173 
kubelet[2131]: E1031 00:43:51.781124 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:51.882307 kubelet[2131]: E1031 00:43:51.882252 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:51.929565 kubelet[2131]: E1031 00:43:51.929451 2131 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:43:51.929657 kubelet[2131]: E1031 00:43:51.929578 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:51.929851 kubelet[2131]: E1031 00:43:51.929811 2131 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 00:43:51.930199 kubelet[2131]: E1031 00:43:51.930065 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:51.982996 kubelet[2131]: E1031 00:43:51.982851 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:52.083642 kubelet[2131]: E1031 00:43:52.083561 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:52.184722 kubelet[2131]: E1031 00:43:52.184558 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:52.285364 kubelet[2131]: E1031 00:43:52.285303 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:52.385708 kubelet[2131]: E1031 00:43:52.385630 2131 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:52.485973 kubelet[2131]: E1031 00:43:52.485787 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:52.586567 kubelet[2131]: E1031 00:43:52.586507 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:52.686665 kubelet[2131]: E1031 00:43:52.686616 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:52.787469 kubelet[2131]: E1031 00:43:52.787312 2131 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 00:43:52.865363 kubelet[2131]: I1031 00:43:52.865316 2131 apiserver.go:52] "Watching apiserver" Oct 31 00:43:52.879414 kubelet[2131]: I1031 00:43:52.879364 2131 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 31 00:43:52.880494 kubelet[2131]: I1031 00:43:52.880447 2131 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:43:52.887764 kubelet[2131]: I1031 00:43:52.887732 2131 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:52.891555 kubelet[2131]: E1031 00:43:52.891506 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:52.891806 kubelet[2131]: I1031 00:43:52.891773 2131 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:43:52.930403 kubelet[2131]: E1031 00:43:52.930373 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:52.930669 kubelet[2131]: E1031 00:43:52.930645 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:53.414330 systemd[1]: Reloading requested from client PID 2423 ('systemctl') (unit session-7.scope)... Oct 31 00:43:53.414348 systemd[1]: Reloading... Oct 31 00:43:53.500961 zram_generator::config[2463]: No configuration found. Oct 31 00:43:53.671054 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 00:43:53.754997 kubelet[2131]: E1031 00:43:53.754956 2131 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:53.768374 systemd[1]: Reloading finished in 353 ms. Oct 31 00:43:53.813045 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:43:53.841480 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 00:43:53.841826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:43:53.841909 systemd[1]: kubelet.service: Consumed 1.237s CPU time, 126.9M memory peak, 0B memory swap peak. Oct 31 00:43:53.855419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 00:43:54.026836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 00:43:54.031800 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 00:43:54.076830 kubelet[2507]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Oct 31 00:43:54.076830 kubelet[2507]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 00:43:54.077289 kubelet[2507]: I1031 00:43:54.076879 2507 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 00:43:54.083967 kubelet[2507]: I1031 00:43:54.083893 2507 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 00:43:54.083967 kubelet[2507]: I1031 00:43:54.083946 2507 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 00:43:54.084058 kubelet[2507]: I1031 00:43:54.083989 2507 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 00:43:54.084058 kubelet[2507]: I1031 00:43:54.083998 2507 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 00:43:54.084320 kubelet[2507]: I1031 00:43:54.084285 2507 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 00:43:54.085566 kubelet[2507]: I1031 00:43:54.085539 2507 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 31 00:43:54.087968 kubelet[2507]: I1031 00:43:54.087844 2507 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 00:43:54.093453 kubelet[2507]: E1031 00:43:54.093399 2507 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 00:43:54.093543 kubelet[2507]: I1031 00:43:54.093472 2507 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Oct 31 00:43:54.098992 kubelet[2507]: I1031 00:43:54.098953 2507 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 31 00:43:54.099276 kubelet[2507]: I1031 00:43:54.099245 2507 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 00:43:54.099470 kubelet[2507]: I1031 00:43:54.099275 2507 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 
00:43:54.099470 kubelet[2507]: I1031 00:43:54.099470 2507 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 00:43:54.099583 kubelet[2507]: I1031 00:43:54.099481 2507 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 00:43:54.099583 kubelet[2507]: I1031 00:43:54.099509 2507 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 00:43:54.100372 kubelet[2507]: I1031 00:43:54.100340 2507 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:43:54.100559 kubelet[2507]: I1031 00:43:54.100531 2507 kubelet.go:475] "Attempting to sync node with API server" Oct 31 00:43:54.100559 kubelet[2507]: I1031 00:43:54.100553 2507 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 00:43:54.100608 kubelet[2507]: I1031 00:43:54.100582 2507 kubelet.go:387] "Adding apiserver pod source" Oct 31 00:43:54.100608 kubelet[2507]: I1031 00:43:54.100606 2507 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 00:43:54.104945 kubelet[2507]: I1031 00:43:54.104890 2507 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 31 00:43:54.105405 kubelet[2507]: I1031 00:43:54.105373 2507 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 00:43:54.105447 kubelet[2507]: I1031 00:43:54.105412 2507 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 00:43:54.110487 kubelet[2507]: I1031 00:43:54.108009 2507 server.go:1262] "Started kubelet" Oct 31 00:43:54.110487 kubelet[2507]: I1031 00:43:54.108237 2507 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 00:43:54.110487 kubelet[2507]: I1031 00:43:54.108266 2507 ratelimit.go:56] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 00:43:54.110487 kubelet[2507]: I1031 00:43:54.108713 2507 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 00:43:54.110487 kubelet[2507]: I1031 00:43:54.108992 2507 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 00:43:54.110487 kubelet[2507]: I1031 00:43:54.109148 2507 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 00:43:54.110487 kubelet[2507]: I1031 00:43:54.109196 2507 server.go:310] "Adding debug handlers to kubelet server" Oct 31 00:43:54.116309 kubelet[2507]: I1031 00:43:54.116274 2507 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 00:43:54.118364 kubelet[2507]: E1031 00:43:54.118317 2507 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 00:43:54.119090 kubelet[2507]: I1031 00:43:54.119059 2507 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 00:43:54.119549 kubelet[2507]: I1031 00:43:54.119432 2507 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 00:43:54.119649 kubelet[2507]: I1031 00:43:54.119634 2507 reconciler.go:29] "Reconciler: start to sync state" Oct 31 00:43:54.125605 kubelet[2507]: I1031 00:43:54.124850 2507 factory.go:223] Registration of the systemd container factory successfully Oct 31 00:43:54.125605 kubelet[2507]: I1031 00:43:54.125285 2507 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 00:43:54.126630 kubelet[2507]: I1031 00:43:54.126598 2507 factory.go:223] Registration of the containerd container factory successfully Oct 31 
00:43:54.133998 kubelet[2507]: I1031 00:43:54.133960 2507 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 31 00:43:54.135282 kubelet[2507]: I1031 00:43:54.135256 2507 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 31 00:43:54.135282 kubelet[2507]: I1031 00:43:54.135273 2507 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 00:43:54.135352 kubelet[2507]: I1031 00:43:54.135292 2507 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 00:43:54.135352 kubelet[2507]: E1031 00:43:54.135334 2507 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 00:43:54.168412 kubelet[2507]: I1031 00:43:54.168375 2507 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 00:43:54.168412 kubelet[2507]: I1031 00:43:54.168395 2507 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 00:43:54.168412 kubelet[2507]: I1031 00:43:54.168415 2507 state_mem.go:36] "Initialized new in-memory state store" Oct 31 00:43:54.168595 kubelet[2507]: I1031 00:43:54.168553 2507 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 00:43:54.168595 kubelet[2507]: I1031 00:43:54.168563 2507 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 00:43:54.168595 kubelet[2507]: I1031 00:43:54.168581 2507 policy_none.go:49] "None policy: Start" Oct 31 00:43:54.168595 kubelet[2507]: I1031 00:43:54.168591 2507 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 00:43:54.168681 kubelet[2507]: I1031 00:43:54.168601 2507 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 00:43:54.168703 kubelet[2507]: I1031 00:43:54.168683 2507 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 31 00:43:54.168703 kubelet[2507]: I1031 00:43:54.168692 2507 policy_none.go:47] 
"Start" Oct 31 00:43:54.173740 kubelet[2507]: E1031 00:43:54.173700 2507 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 00:43:54.173895 kubelet[2507]: I1031 00:43:54.173880 2507 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 00:43:54.173941 kubelet[2507]: I1031 00:43:54.173896 2507 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 00:43:54.174268 kubelet[2507]: I1031 00:43:54.174242 2507 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 00:43:54.176645 kubelet[2507]: E1031 00:43:54.175186 2507 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 00:43:54.236749 kubelet[2507]: I1031 00:43:54.236652 2507 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:43:54.236749 kubelet[2507]: I1031 00:43:54.236681 2507 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:43:54.237003 kubelet[2507]: I1031 00:43:54.236676 2507 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:54.243133 kubelet[2507]: E1031 00:43:54.243082 2507 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 00:43:54.243507 kubelet[2507]: E1031 00:43:54.243477 2507 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 00:43:54.243896 kubelet[2507]: E1031 00:43:54.243850 2507 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" 
Oct 31 00:43:54.281530 kubelet[2507]: I1031 00:43:54.281356 2507 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 00:43:54.288603 kubelet[2507]: I1031 00:43:54.288563 2507 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 00:43:54.288686 kubelet[2507]: I1031 00:43:54.288670 2507 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 00:43:54.320821 kubelet[2507]: I1031 00:43:54.320752 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e77551c38618623fadb3aa91a740c1ce-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e77551c38618623fadb3aa91a740c1ce\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:43:54.320821 kubelet[2507]: I1031 00:43:54.320807 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e77551c38618623fadb3aa91a740c1ce-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e77551c38618623fadb3aa91a740c1ce\") " pod="kube-system/kube-apiserver-localhost" Oct 31 00:43:54.320821 kubelet[2507]: I1031 00:43:54.320832 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:54.321082 kubelet[2507]: I1031 00:43:54.320851 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e77551c38618623fadb3aa91a740c1ce-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e77551c38618623fadb3aa91a740c1ce\") " pod="kube-system/kube-apiserver-localhost" Oct 
31 00:43:54.321082 kubelet[2507]: I1031 00:43:54.320867 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:54.321082 kubelet[2507]: I1031 00:43:54.320988 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:54.321082 kubelet[2507]: I1031 00:43:54.321041 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:54.321082 kubelet[2507]: I1031 00:43:54.321063 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:54.321199 kubelet[2507]: I1031 00:43:54.321086 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " 
pod="kube-system/kube-scheduler-localhost" Oct 31 00:43:54.544318 kubelet[2507]: E1031 00:43:54.544169 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:54.544318 kubelet[2507]: E1031 00:43:54.544204 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:54.544318 kubelet[2507]: E1031 00:43:54.544262 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:55.102207 kubelet[2507]: I1031 00:43:55.102148 2507 apiserver.go:52] "Watching apiserver" Oct 31 00:43:55.120105 kubelet[2507]: I1031 00:43:55.120005 2507 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 31 00:43:55.151955 kubelet[2507]: I1031 00:43:55.151892 2507 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 00:43:55.152380 kubelet[2507]: I1031 00:43:55.152335 2507 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:55.154179 kubelet[2507]: I1031 00:43:55.152586 2507 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 00:43:55.159974 kubelet[2507]: E1031 00:43:55.159278 2507 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 00:43:55.159974 kubelet[2507]: E1031 00:43:55.159315 2507 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 31 00:43:55.159974 kubelet[2507]: 
E1031 00:43:55.159495 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:55.159974 kubelet[2507]: E1031 00:43:55.159514 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:55.160379 kubelet[2507]: E1031 00:43:55.160346 2507 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 31 00:43:55.160710 kubelet[2507]: E1031 00:43:55.160680 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:55.178513 kubelet[2507]: I1031 00:43:55.178428 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.178400095 podStartE2EDuration="3.178400095s" podCreationTimestamp="2025-10-31 00:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:43:55.170037354 +0000 UTC m=+1.134168960" watchObservedRunningTime="2025-10-31 00:43:55.178400095 +0000 UTC m=+1.142531681" Oct 31 00:43:55.184982 kubelet[2507]: I1031 00:43:55.184917 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.18489556 podStartE2EDuration="3.18489556s" podCreationTimestamp="2025-10-31 00:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:43:55.178765826 +0000 UTC m=+1.142897412" watchObservedRunningTime="2025-10-31 00:43:55.18489556 +0000 UTC 
m=+1.149027146" Oct 31 00:43:56.154334 kubelet[2507]: E1031 00:43:56.154273 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:56.154334 kubelet[2507]: E1031 00:43:56.154337 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:43:56.154912 kubelet[2507]: E1031 00:43:56.154492 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:00.242331 kubelet[2507]: I1031 00:44:00.242290 2507 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 00:44:00.243069 containerd[1476]: time="2025-10-31T00:44:00.243032887Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 31 00:44:00.243394 kubelet[2507]: I1031 00:44:00.243191 2507 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 00:44:00.855195 kubelet[2507]: I1031 00:44:00.855116 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.855069827 podStartE2EDuration="8.855069827s" podCreationTimestamp="2025-10-31 00:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:43:55.185206286 +0000 UTC m=+1.149337872" watchObservedRunningTime="2025-10-31 00:44:00.855069827 +0000 UTC m=+6.819201413" Oct 31 00:44:00.864966 systemd[1]: Created slice kubepods-besteffort-pod482fc7cb_3536_4131_af8b_ed63ae9062d4.slice - libcontainer container kubepods-besteffort-pod482fc7cb_3536_4131_af8b_ed63ae9062d4.slice. Oct 31 00:44:00.960172 kubelet[2507]: I1031 00:44:00.960120 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/482fc7cb-3536-4131-af8b-ed63ae9062d4-kube-proxy\") pod \"kube-proxy-2b7sd\" (UID: \"482fc7cb-3536-4131-af8b-ed63ae9062d4\") " pod="kube-system/kube-proxy-2b7sd" Oct 31 00:44:00.960172 kubelet[2507]: I1031 00:44:00.960162 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/482fc7cb-3536-4131-af8b-ed63ae9062d4-lib-modules\") pod \"kube-proxy-2b7sd\" (UID: \"482fc7cb-3536-4131-af8b-ed63ae9062d4\") " pod="kube-system/kube-proxy-2b7sd" Oct 31 00:44:00.960172 kubelet[2507]: I1031 00:44:00.960178 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/482fc7cb-3536-4131-af8b-ed63ae9062d4-xtables-lock\") pod \"kube-proxy-2b7sd\" (UID: 
\"482fc7cb-3536-4131-af8b-ed63ae9062d4\") " pod="kube-system/kube-proxy-2b7sd" Oct 31 00:44:00.960172 kubelet[2507]: I1031 00:44:00.960197 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9kwl\" (UniqueName: \"kubernetes.io/projected/482fc7cb-3536-4131-af8b-ed63ae9062d4-kube-api-access-g9kwl\") pod \"kube-proxy-2b7sd\" (UID: \"482fc7cb-3536-4131-af8b-ed63ae9062d4\") " pod="kube-system/kube-proxy-2b7sd" Oct 31 00:44:01.180468 kubelet[2507]: E1031 00:44:01.180308 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:01.181160 containerd[1476]: time="2025-10-31T00:44:01.180916314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2b7sd,Uid:482fc7cb-3536-4131-af8b-ed63ae9062d4,Namespace:kube-system,Attempt:0,}" Oct 31 00:44:01.204847 containerd[1476]: time="2025-10-31T00:44:01.204727563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:44:01.204847 containerd[1476]: time="2025-10-31T00:44:01.204806153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:44:01.205079 containerd[1476]: time="2025-10-31T00:44:01.204825289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:01.205079 containerd[1476]: time="2025-10-31T00:44:01.204999882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:01.209956 kubelet[2507]: E1031 00:44:01.209689 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:01.233145 systemd[1]: Started cri-containerd-9bb1d68bb49dab55a99d1d0c864d1fb248e4d235335be53f2f181fcef1a34e96.scope - libcontainer container 9bb1d68bb49dab55a99d1d0c864d1fb248e4d235335be53f2f181fcef1a34e96. Oct 31 00:44:01.261804 containerd[1476]: time="2025-10-31T00:44:01.261750962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2b7sd,Uid:482fc7cb-3536-4131-af8b-ed63ae9062d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bb1d68bb49dab55a99d1d0c864d1fb248e4d235335be53f2f181fcef1a34e96\"" Oct 31 00:44:01.264776 kubelet[2507]: E1031 00:44:01.264748 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:01.272099 containerd[1476]: time="2025-10-31T00:44:01.272059629Z" level=info msg="CreateContainer within sandbox \"9bb1d68bb49dab55a99d1d0c864d1fb248e4d235335be53f2f181fcef1a34e96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 00:44:01.290700 containerd[1476]: time="2025-10-31T00:44:01.290633703Z" level=info msg="CreateContainer within sandbox \"9bb1d68bb49dab55a99d1d0c864d1fb248e4d235335be53f2f181fcef1a34e96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"314812d83a9c084841bf57f997ff1837b54adc77bc68a234524b0763d7cc0c0d\"" Oct 31 00:44:01.291324 containerd[1476]: time="2025-10-31T00:44:01.291264042Z" level=info msg="StartContainer for \"314812d83a9c084841bf57f997ff1837b54adc77bc68a234524b0763d7cc0c0d\"" Oct 31 00:44:01.317064 systemd[1]: Started cri-containerd-314812d83a9c084841bf57f997ff1837b54adc77bc68a234524b0763d7cc0c0d.scope - libcontainer 
container 314812d83a9c084841bf57f997ff1837b54adc77bc68a234524b0763d7cc0c0d. Oct 31 00:44:01.358359 containerd[1476]: time="2025-10-31T00:44:01.358303300Z" level=info msg="StartContainer for \"314812d83a9c084841bf57f997ff1837b54adc77bc68a234524b0763d7cc0c0d\" returns successfully" Oct 31 00:44:01.364530 kubelet[2507]: I1031 00:44:01.364486 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9b9z\" (UniqueName: \"kubernetes.io/projected/c866f0c2-d6ef-4994-97f6-b5442778e1ef-kube-api-access-v9b9z\") pod \"tigera-operator-65cdcdfd6d-v6p6v\" (UID: \"c866f0c2-d6ef-4994-97f6-b5442778e1ef\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-v6p6v" Oct 31 00:44:01.364530 kubelet[2507]: I1031 00:44:01.364529 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c866f0c2-d6ef-4994-97f6-b5442778e1ef-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-v6p6v\" (UID: \"c866f0c2-d6ef-4994-97f6-b5442778e1ef\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-v6p6v" Oct 31 00:44:01.375238 systemd[1]: Created slice kubepods-besteffort-podc866f0c2_d6ef_4994_97f6_b5442778e1ef.slice - libcontainer container kubepods-besteffort-podc866f0c2_d6ef_4994_97f6_b5442778e1ef.slice. Oct 31 00:44:01.690107 containerd[1476]: time="2025-10-31T00:44:01.690055930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-v6p6v,Uid:c866f0c2-d6ef-4994-97f6-b5442778e1ef,Namespace:tigera-operator,Attempt:0,}" Oct 31 00:44:01.716067 containerd[1476]: time="2025-10-31T00:44:01.715809014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:44:01.717365 containerd[1476]: time="2025-10-31T00:44:01.717079551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:44:01.717365 containerd[1476]: time="2025-10-31T00:44:01.717133945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:01.717565 containerd[1476]: time="2025-10-31T00:44:01.717299740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:01.748211 systemd[1]: Started cri-containerd-369fa9d72704ddf7f78796e05f131adaa0f1befd5a9cfe9a91e47426bad17caf.scope - libcontainer container 369fa9d72704ddf7f78796e05f131adaa0f1befd5a9cfe9a91e47426bad17caf. Oct 31 00:44:01.793026 containerd[1476]: time="2025-10-31T00:44:01.792978379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-v6p6v,Uid:c866f0c2-d6ef-4994-97f6-b5442778e1ef,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"369fa9d72704ddf7f78796e05f131adaa0f1befd5a9cfe9a91e47426bad17caf\"" Oct 31 00:44:01.796168 containerd[1476]: time="2025-10-31T00:44:01.796122090Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 00:44:01.900917 kubelet[2507]: E1031 00:44:01.900881 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:02.168952 kubelet[2507]: E1031 00:44:02.166389 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:02.168952 kubelet[2507]: E1031 00:44:02.166781 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:02.168952 kubelet[2507]: E1031 00:44:02.167082 2507 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:02.192723 kubelet[2507]: I1031 00:44:02.192651 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2b7sd" podStartSLOduration=2.192630161 podStartE2EDuration="2.192630161s" podCreationTimestamp="2025-10-31 00:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:44:02.19241998 +0000 UTC m=+8.156551576" watchObservedRunningTime="2025-10-31 00:44:02.192630161 +0000 UTC m=+8.156761747" Oct 31 00:44:02.269444 kubelet[2507]: E1031 00:44:02.269404 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:02.922107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477311576.mount: Deactivated successfully. 
Oct 31 00:44:03.167644 kubelet[2507]: E1031 00:44:03.167600 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:03.168087 kubelet[2507]: E1031 00:44:03.168071 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:05.142649 containerd[1476]: time="2025-10-31T00:44:05.142536172Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:05.143701 containerd[1476]: time="2025-10-31T00:44:05.143230269Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 31 00:44:05.144323 containerd[1476]: time="2025-10-31T00:44:05.144288125Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:05.147647 containerd[1476]: time="2025-10-31T00:44:05.147607661Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:05.148759 containerd[1476]: time="2025-10-31T00:44:05.148715202Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.352545481s" Oct 31 00:44:05.148803 containerd[1476]: time="2025-10-31T00:44:05.148772801Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image 
reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 00:44:05.153669 containerd[1476]: time="2025-10-31T00:44:05.153622809Z" level=info msg="CreateContainer within sandbox \"369fa9d72704ddf7f78796e05f131adaa0f1befd5a9cfe9a91e47426bad17caf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 00:44:05.167857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408766371.mount: Deactivated successfully. Oct 31 00:44:05.170496 containerd[1476]: time="2025-10-31T00:44:05.170450826Z" level=info msg="CreateContainer within sandbox \"369fa9d72704ddf7f78796e05f131adaa0f1befd5a9cfe9a91e47426bad17caf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"df015488000c01c5c4d0f5b62d17ca582cf2ce57551c749c9ebcd10ebd9ddfef\"" Oct 31 00:44:05.171124 containerd[1476]: time="2025-10-31T00:44:05.171092212Z" level=info msg="StartContainer for \"df015488000c01c5c4d0f5b62d17ca582cf2ce57551c749c9ebcd10ebd9ddfef\"" Oct 31 00:44:05.212109 systemd[1]: Started cri-containerd-df015488000c01c5c4d0f5b62d17ca582cf2ce57551c749c9ebcd10ebd9ddfef.scope - libcontainer container df015488000c01c5c4d0f5b62d17ca582cf2ce57551c749c9ebcd10ebd9ddfef. Oct 31 00:44:05.281472 containerd[1476]: time="2025-10-31T00:44:05.281411196Z" level=info msg="StartContainer for \"df015488000c01c5c4d0f5b62d17ca582cf2ce57551c749c9ebcd10ebd9ddfef\" returns successfully" Oct 31 00:44:06.155224 update_engine[1452]: I20251031 00:44:06.155133 1452 update_attempter.cc:509] Updating boot flags... 
Oct 31 00:44:06.195052 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2871) Oct 31 00:44:06.195559 kubelet[2507]: I1031 00:44:06.190457 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-v6p6v" podStartSLOduration=1.836444666 podStartE2EDuration="5.190402148s" podCreationTimestamp="2025-10-31 00:44:01 +0000 UTC" firstStartedPulling="2025-10-31 00:44:01.795719534 +0000 UTC m=+7.759851120" lastFinishedPulling="2025-10-31 00:44:05.149677026 +0000 UTC m=+11.113808602" observedRunningTime="2025-10-31 00:44:06.190238006 +0000 UTC m=+12.154369592" watchObservedRunningTime="2025-10-31 00:44:06.190402148 +0000 UTC m=+12.154533734" Oct 31 00:44:06.266982 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2871) Oct 31 00:44:06.300949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2871) Oct 31 00:44:09.356864 sudo[1643]: pam_unix(sudo:session): session closed for user root Oct 31 00:44:09.362045 sshd[1640]: pam_unix(sshd:session): session closed for user core Oct 31 00:44:09.369123 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Oct 31 00:44:09.370133 systemd[1]: sshd@6-10.0.0.107:22-10.0.0.1:39470.service: Deactivated successfully. Oct 31 00:44:09.372980 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 00:44:09.373236 systemd[1]: session-7.scope: Consumed 6.448s CPU time, 162.1M memory peak, 0B memory swap peak. Oct 31 00:44:09.374686 systemd-logind[1450]: Removed session 7. Oct 31 00:44:13.702518 systemd[1]: Created slice kubepods-besteffort-pod2b9cf561_0cf9_4752_8714_fb25c23b9dd9.slice - libcontainer container kubepods-besteffort-pod2b9cf561_0cf9_4752_8714_fb25c23b9dd9.slice. 
Oct 31 00:44:13.781112 systemd[1]: Created slice kubepods-besteffort-pod15b3eeca_c9c2_4128_8497_516ebc53c52b.slice - libcontainer container kubepods-besteffort-pod15b3eeca_c9c2_4128_8497_516ebc53c52b.slice. Oct 31 00:44:13.819877 kubelet[2507]: I1031 00:44:13.819806 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/15b3eeca-c9c2-4128-8497-516ebc53c52b-cni-log-dir\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.819877 kubelet[2507]: I1031 00:44:13.819860 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15b3eeca-c9c2-4128-8497-516ebc53c52b-xtables-lock\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.820586 kubelet[2507]: I1031 00:44:13.819944 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/15b3eeca-c9c2-4128-8497-516ebc53c52b-node-certs\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.820586 kubelet[2507]: I1031 00:44:13.819983 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnvkr\" (UniqueName: \"kubernetes.io/projected/15b3eeca-c9c2-4128-8497-516ebc53c52b-kube-api-access-bnvkr\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.820586 kubelet[2507]: I1031 00:44:13.820011 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/15b3eeca-c9c2-4128-8497-516ebc53c52b-var-lib-calico\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.820586 kubelet[2507]: I1031 00:44:13.820040 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2b9cf561-0cf9-4752-8714-fb25c23b9dd9-typha-certs\") pod \"calico-typha-555fc58c55-j26rg\" (UID: \"2b9cf561-0cf9-4752-8714-fb25c23b9dd9\") " pod="calico-system/calico-typha-555fc58c55-j26rg" Oct 31 00:44:13.820586 kubelet[2507]: I1031 00:44:13.820057 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/15b3eeca-c9c2-4128-8497-516ebc53c52b-cni-bin-dir\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.820726 kubelet[2507]: I1031 00:44:13.820075 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b3eeca-c9c2-4128-8497-516ebc53c52b-tigera-ca-bundle\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.820726 kubelet[2507]: I1031 00:44:13.820091 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/15b3eeca-c9c2-4128-8497-516ebc53c52b-cni-net-dir\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.820726 kubelet[2507]: I1031 00:44:13.820114 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/15b3eeca-c9c2-4128-8497-516ebc53c52b-lib-modules\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.820726 kubelet[2507]: I1031 00:44:13.820137 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/15b3eeca-c9c2-4128-8497-516ebc53c52b-var-run-calico\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.820726 kubelet[2507]: I1031 00:44:13.820176 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b9cf561-0cf9-4752-8714-fb25c23b9dd9-tigera-ca-bundle\") pod \"calico-typha-555fc58c55-j26rg\" (UID: \"2b9cf561-0cf9-4752-8714-fb25c23b9dd9\") " pod="calico-system/calico-typha-555fc58c55-j26rg" Oct 31 00:44:13.820858 kubelet[2507]: I1031 00:44:13.820207 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jh8c\" (UniqueName: \"kubernetes.io/projected/2b9cf561-0cf9-4752-8714-fb25c23b9dd9-kube-api-access-7jh8c\") pod \"calico-typha-555fc58c55-j26rg\" (UID: \"2b9cf561-0cf9-4752-8714-fb25c23b9dd9\") " pod="calico-system/calico-typha-555fc58c55-j26rg" Oct 31 00:44:13.820858 kubelet[2507]: I1031 00:44:13.820357 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/15b3eeca-c9c2-4128-8497-516ebc53c52b-flexvol-driver-host\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.820858 kubelet[2507]: I1031 00:44:13.820433 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/15b3eeca-c9c2-4128-8497-516ebc53c52b-policysync\") pod \"calico-node-tst7g\" (UID: \"15b3eeca-c9c2-4128-8497-516ebc53c52b\") " pod="calico-system/calico-node-tst7g" Oct 31 00:44:13.930986 kubelet[2507]: E1031 00:44:13.929275 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:44:13.931904 kubelet[2507]: E1031 00:44:13.931435 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.931904 kubelet[2507]: W1031 00:44:13.931483 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.931904 kubelet[2507]: E1031 00:44:13.931527 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.933848 kubelet[2507]: E1031 00:44:13.933827 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.933939 kubelet[2507]: W1031 00:44:13.933908 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.934010 kubelet[2507]: E1031 00:44:13.933998 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:13.934393 kubelet[2507]: E1031 00:44:13.934379 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.934455 kubelet[2507]: W1031 00:44:13.934444 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.934516 kubelet[2507]: E1031 00:44:13.934505 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.934792 kubelet[2507]: E1031 00:44:13.934778 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.934855 kubelet[2507]: W1031 00:44:13.934843 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.934904 kubelet[2507]: E1031 00:44:13.934893 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:13.935278 kubelet[2507]: E1031 00:44:13.935264 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.935348 kubelet[2507]: W1031 00:44:13.935336 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.935407 kubelet[2507]: E1031 00:44:13.935396 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.935793 kubelet[2507]: E1031 00:44:13.935780 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.935857 kubelet[2507]: W1031 00:44:13.935844 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.936063 kubelet[2507]: E1031 00:44:13.935905 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:13.936290 kubelet[2507]: E1031 00:44:13.936273 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.936362 kubelet[2507]: W1031 00:44:13.936349 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.936415 kubelet[2507]: E1031 00:44:13.936403 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.940542 kubelet[2507]: E1031 00:44:13.940524 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.940608 kubelet[2507]: W1031 00:44:13.940596 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.940680 kubelet[2507]: E1031 00:44:13.940668 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:13.957145 kubelet[2507]: E1031 00:44:13.957034 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.957145 kubelet[2507]: W1031 00:44:13.957066 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.957145 kubelet[2507]: E1031 00:44:13.957094 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.987915 kubelet[2507]: E1031 00:44:13.987760 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.987915 kubelet[2507]: W1031 00:44:13.987787 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.987915 kubelet[2507]: E1031 00:44:13.987812 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:13.988316 kubelet[2507]: E1031 00:44:13.988253 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.988316 kubelet[2507]: W1031 00:44:13.988265 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.988316 kubelet[2507]: E1031 00:44:13.988275 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.988715 kubelet[2507]: E1031 00:44:13.988652 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.988715 kubelet[2507]: W1031 00:44:13.988662 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.988715 kubelet[2507]: E1031 00:44:13.988672 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:13.991043 kubelet[2507]: E1031 00:44:13.990961 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.991043 kubelet[2507]: W1031 00:44:13.990974 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.991043 kubelet[2507]: E1031 00:44:13.990985 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.991434 kubelet[2507]: E1031 00:44:13.991372 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.991434 kubelet[2507]: W1031 00:44:13.991386 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.991434 kubelet[2507]: E1031 00:44:13.991397 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:13.991822 kubelet[2507]: E1031 00:44:13.991764 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.991822 kubelet[2507]: W1031 00:44:13.991775 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.991822 kubelet[2507]: E1031 00:44:13.991785 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.995132 kubelet[2507]: E1031 00:44:13.995093 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.995201 kubelet[2507]: W1031 00:44:13.995127 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.995228 kubelet[2507]: E1031 00:44:13.995164 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:13.996026 kubelet[2507]: E1031 00:44:13.996002 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.996026 kubelet[2507]: W1031 00:44:13.996021 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.996104 kubelet[2507]: E1031 00:44:13.996032 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.997024 kubelet[2507]: E1031 00:44:13.997002 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.997024 kubelet[2507]: W1031 00:44:13.997018 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.997104 kubelet[2507]: E1031 00:44:13.997029 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:13.997285 kubelet[2507]: E1031 00:44:13.997266 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.997285 kubelet[2507]: W1031 00:44:13.997280 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.997341 kubelet[2507]: E1031 00:44:13.997290 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.999059 kubelet[2507]: E1031 00:44:13.999005 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.999059 kubelet[2507]: W1031 00:44:13.999054 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.999143 kubelet[2507]: E1031 00:44:13.999066 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:13.999567 kubelet[2507]: E1031 00:44:13.999547 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.999567 kubelet[2507]: W1031 00:44:13.999562 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.999635 kubelet[2507]: E1031 00:44:13.999573 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:13.999888 kubelet[2507]: E1031 00:44:13.999848 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:13.999938 kubelet[2507]: W1031 00:44:13.999888 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:13.999963 kubelet[2507]: E1031 00:44:13.999951 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.000286 kubelet[2507]: E1031 00:44:14.000265 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.000286 kubelet[2507]: W1031 00:44:14.000280 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.000350 kubelet[2507]: E1031 00:44:14.000290 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.000525 kubelet[2507]: E1031 00:44:14.000503 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.000525 kubelet[2507]: W1031 00:44:14.000517 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.000525 kubelet[2507]: E1031 00:44:14.000526 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.001653 kubelet[2507]: E1031 00:44:14.001629 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.001653 kubelet[2507]: W1031 00:44:14.001648 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.001724 kubelet[2507]: E1031 00:44:14.001660 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.001899 kubelet[2507]: E1031 00:44:14.001878 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.001899 kubelet[2507]: W1031 00:44:14.001893 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.001973 kubelet[2507]: E1031 00:44:14.001903 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.002139 kubelet[2507]: E1031 00:44:14.002117 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.002139 kubelet[2507]: W1031 00:44:14.002133 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.002220 kubelet[2507]: E1031 00:44:14.002142 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.003025 kubelet[2507]: E1031 00:44:14.003002 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.003025 kubelet[2507]: W1031 00:44:14.003019 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.003095 kubelet[2507]: E1031 00:44:14.003029 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.003254 kubelet[2507]: E1031 00:44:14.003234 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.003254 kubelet[2507]: W1031 00:44:14.003248 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.003313 kubelet[2507]: E1031 00:44:14.003258 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.014886 kubelet[2507]: E1031 00:44:14.014842 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:14.015793 containerd[1476]: time="2025-10-31T00:44:14.015716228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-555fc58c55-j26rg,Uid:2b9cf561-0cf9-4752-8714-fb25c23b9dd9,Namespace:calico-system,Attempt:0,}" Oct 31 00:44:14.024862 kubelet[2507]: E1031 00:44:14.024687 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.024862 kubelet[2507]: W1031 00:44:14.024710 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.024862 kubelet[2507]: E1031 00:44:14.024733 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.024862 kubelet[2507]: I1031 00:44:14.024761 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51ae5eae-434b-4353-bdcc-818b667dd4ed-kubelet-dir\") pod \"csi-node-driver-rgpjr\" (UID: \"51ae5eae-434b-4353-bdcc-818b667dd4ed\") " pod="calico-system/csi-node-driver-rgpjr" Oct 31 00:44:14.025158 kubelet[2507]: E1031 00:44:14.025144 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.025386 kubelet[2507]: W1031 00:44:14.025244 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.025386 kubelet[2507]: E1031 00:44:14.025260 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.025386 kubelet[2507]: I1031 00:44:14.025284 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/51ae5eae-434b-4353-bdcc-818b667dd4ed-varrun\") pod \"csi-node-driver-rgpjr\" (UID: \"51ae5eae-434b-4353-bdcc-818b667dd4ed\") " pod="calico-system/csi-node-driver-rgpjr" Oct 31 00:44:14.025539 kubelet[2507]: E1031 00:44:14.025526 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.025600 kubelet[2507]: W1031 00:44:14.025588 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.025656 kubelet[2507]: E1031 00:44:14.025645 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.025738 kubelet[2507]: I1031 00:44:14.025724 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m529h\" (UniqueName: \"kubernetes.io/projected/51ae5eae-434b-4353-bdcc-818b667dd4ed-kube-api-access-m529h\") pod \"csi-node-driver-rgpjr\" (UID: \"51ae5eae-434b-4353-bdcc-818b667dd4ed\") " pod="calico-system/csi-node-driver-rgpjr" Oct 31 00:44:14.026174 kubelet[2507]: E1031 00:44:14.026143 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.026236 kubelet[2507]: W1031 00:44:14.026172 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.026236 kubelet[2507]: E1031 00:44:14.026206 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.026577 kubelet[2507]: E1031 00:44:14.026427 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.026577 kubelet[2507]: W1031 00:44:14.026442 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.026577 kubelet[2507]: E1031 00:44:14.026450 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.026804 kubelet[2507]: E1031 00:44:14.026775 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.026836 kubelet[2507]: W1031 00:44:14.026802 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.026836 kubelet[2507]: E1031 00:44:14.026832 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.027205 kubelet[2507]: E1031 00:44:14.027165 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.027496 kubelet[2507]: W1031 00:44:14.027387 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.027496 kubelet[2507]: E1031 00:44:14.027403 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.030001 kubelet[2507]: E1031 00:44:14.029978 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.030785 kubelet[2507]: W1031 00:44:14.030076 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.030785 kubelet[2507]: E1031 00:44:14.030105 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.030785 kubelet[2507]: I1031 00:44:14.030165 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/51ae5eae-434b-4353-bdcc-818b667dd4ed-socket-dir\") pod \"csi-node-driver-rgpjr\" (UID: \"51ae5eae-434b-4353-bdcc-818b667dd4ed\") " pod="calico-system/csi-node-driver-rgpjr" Oct 31 00:44:14.030785 kubelet[2507]: E1031 00:44:14.030532 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.030785 kubelet[2507]: W1031 00:44:14.030543 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.030785 kubelet[2507]: E1031 00:44:14.030554 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.031157 kubelet[2507]: E1031 00:44:14.030841 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.031157 kubelet[2507]: W1031 00:44:14.030850 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.031157 kubelet[2507]: E1031 00:44:14.030859 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.031157 kubelet[2507]: E1031 00:44:14.031127 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.031157 kubelet[2507]: W1031 00:44:14.031135 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.031157 kubelet[2507]: E1031 00:44:14.031144 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.031157 kubelet[2507]: I1031 00:44:14.031162 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/51ae5eae-434b-4353-bdcc-818b667dd4ed-registration-dir\") pod \"csi-node-driver-rgpjr\" (UID: \"51ae5eae-434b-4353-bdcc-818b667dd4ed\") " pod="calico-system/csi-node-driver-rgpjr" Oct 31 00:44:14.031810 kubelet[2507]: E1031 00:44:14.031431 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.031810 kubelet[2507]: W1031 00:44:14.031443 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.031810 kubelet[2507]: E1031 00:44:14.031452 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.031810 kubelet[2507]: E1031 00:44:14.031658 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.031810 kubelet[2507]: W1031 00:44:14.031667 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.031810 kubelet[2507]: E1031 00:44:14.031675 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.032156 kubelet[2507]: E1031 00:44:14.031868 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.032156 kubelet[2507]: W1031 00:44:14.031877 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.032156 kubelet[2507]: E1031 00:44:14.031885 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.032156 kubelet[2507]: E1031 00:44:14.032106 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.032156 kubelet[2507]: W1031 00:44:14.032114 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.032156 kubelet[2507]: E1031 00:44:14.032122 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.049002 containerd[1476]: time="2025-10-31T00:44:14.047594570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:44:14.049002 containerd[1476]: time="2025-10-31T00:44:14.047679761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:44:14.049002 containerd[1476]: time="2025-10-31T00:44:14.047690491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:14.049002 containerd[1476]: time="2025-10-31T00:44:14.047796601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:14.079101 systemd[1]: Started cri-containerd-2266bf090cc8e39a7a9090b5d7659b61c34f94203d3fd205d4043165abbc496f.scope - libcontainer container 2266bf090cc8e39a7a9090b5d7659b61c34f94203d3fd205d4043165abbc496f. Oct 31 00:44:14.087284 kubelet[2507]: E1031 00:44:14.087015 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:14.088462 containerd[1476]: time="2025-10-31T00:44:14.087902581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tst7g,Uid:15b3eeca-c9c2-4128-8497-516ebc53c52b,Namespace:calico-system,Attempt:0,}" Oct 31 00:44:14.121793 containerd[1476]: time="2025-10-31T00:44:14.121686100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:44:14.121793 containerd[1476]: time="2025-10-31T00:44:14.121755210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:44:14.121793 containerd[1476]: time="2025-10-31T00:44:14.121766692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:14.122017 containerd[1476]: time="2025-10-31T00:44:14.121863615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:14.132614 kubelet[2507]: E1031 00:44:14.132583 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.132614 kubelet[2507]: W1031 00:44:14.132604 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.132614 kubelet[2507]: E1031 00:44:14.132626 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.133259 kubelet[2507]: E1031 00:44:14.133149 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.133259 kubelet[2507]: W1031 00:44:14.133166 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.133259 kubelet[2507]: E1031 00:44:14.133191 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.133728 kubelet[2507]: E1031 00:44:14.133710 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.133728 kubelet[2507]: W1031 00:44:14.133724 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.133801 kubelet[2507]: E1031 00:44:14.133734 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.134769 kubelet[2507]: E1031 00:44:14.134675 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.134769 kubelet[2507]: W1031 00:44:14.134691 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.134769 kubelet[2507]: E1031 00:44:14.134702 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.135124 kubelet[2507]: E1031 00:44:14.135085 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.135170 kubelet[2507]: W1031 00:44:14.135120 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.135170 kubelet[2507]: E1031 00:44:14.135154 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.135905 kubelet[2507]: E1031 00:44:14.135867 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.135905 kubelet[2507]: W1031 00:44:14.135891 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.135905 kubelet[2507]: E1031 00:44:14.135904 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.136356 kubelet[2507]: E1031 00:44:14.136333 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.136356 kubelet[2507]: W1031 00:44:14.136352 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.136422 kubelet[2507]: E1031 00:44:14.136364 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.136978 kubelet[2507]: E1031 00:44:14.136959 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.137055 kubelet[2507]: W1031 00:44:14.137006 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.137055 kubelet[2507]: E1031 00:44:14.137019 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.137443 kubelet[2507]: E1031 00:44:14.137415 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.137443 kubelet[2507]: W1031 00:44:14.137428 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.137443 kubelet[2507]: E1031 00:44:14.137438 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.137785 kubelet[2507]: E1031 00:44:14.137749 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.137785 kubelet[2507]: W1031 00:44:14.137762 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.137785 kubelet[2507]: E1031 00:44:14.137772 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.138127 kubelet[2507]: E1031 00:44:14.138108 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.138127 kubelet[2507]: W1031 00:44:14.138123 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.138237 kubelet[2507]: E1031 00:44:14.138135 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.138677 kubelet[2507]: E1031 00:44:14.138641 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.138677 kubelet[2507]: W1031 00:44:14.138657 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.138677 kubelet[2507]: E1031 00:44:14.138669 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.140082 kubelet[2507]: E1031 00:44:14.140050 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.140082 kubelet[2507]: W1031 00:44:14.140064 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.140082 kubelet[2507]: E1031 00:44:14.140075 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.140521 containerd[1476]: time="2025-10-31T00:44:14.140484577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-555fc58c55-j26rg,Uid:2b9cf561-0cf9-4752-8714-fb25c23b9dd9,Namespace:calico-system,Attempt:0,} returns sandbox id \"2266bf090cc8e39a7a9090b5d7659b61c34f94203d3fd205d4043165abbc496f\"" Oct 31 00:44:14.140703 kubelet[2507]: E1031 00:44:14.140661 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.140703 kubelet[2507]: W1031 00:44:14.140676 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.140703 kubelet[2507]: E1031 00:44:14.140688 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.140980 kubelet[2507]: E1031 00:44:14.140964 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.140980 kubelet[2507]: W1031 00:44:14.140977 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.140980 kubelet[2507]: E1031 00:44:14.140988 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.142098 kubelet[2507]: E1031 00:44:14.142075 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.142098 kubelet[2507]: W1031 00:44:14.142092 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.142098 kubelet[2507]: E1031 00:44:14.142105 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.142429 kubelet[2507]: E1031 00:44:14.142408 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.142429 kubelet[2507]: W1031 00:44:14.142421 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.142429 kubelet[2507]: E1031 00:44:14.142431 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.142824 kubelet[2507]: E1031 00:44:14.142807 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.142824 kubelet[2507]: W1031 00:44:14.142821 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.142887 kubelet[2507]: E1031 00:44:14.142831 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.143426 kubelet[2507]: E1031 00:44:14.143409 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.143426 kubelet[2507]: W1031 00:44:14.143423 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.143522 kubelet[2507]: E1031 00:44:14.143434 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.143856 kubelet[2507]: E1031 00:44:14.143833 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:14.144130 kubelet[2507]: E1031 00:44:14.143786 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.144130 kubelet[2507]: W1031 00:44:14.144050 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.144130 kubelet[2507]: E1031 00:44:14.144066 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.144509 kubelet[2507]: E1031 00:44:14.144485 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.144509 kubelet[2507]: W1031 00:44:14.144502 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.144594 kubelet[2507]: E1031 00:44:14.144512 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.145079 kubelet[2507]: E1031 00:44:14.145059 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.145079 kubelet[2507]: W1031 00:44:14.145074 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.145243 kubelet[2507]: E1031 00:44:14.145086 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.145624 containerd[1476]: time="2025-10-31T00:44:14.145591205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 00:44:14.145975 kubelet[2507]: E1031 00:44:14.145954 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.145975 kubelet[2507]: W1031 00:44:14.145969 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.145975 kubelet[2507]: E1031 00:44:14.145981 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.146149 systemd[1]: Started cri-containerd-3c9e1f8a50f7ffda9a7fee9b65100047d7bc313e454bfde1efd269699e4c508c.scope - libcontainer container 3c9e1f8a50f7ffda9a7fee9b65100047d7bc313e454bfde1efd269699e4c508c. Oct 31 00:44:14.146768 kubelet[2507]: E1031 00:44:14.146352 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.146768 kubelet[2507]: W1031 00:44:14.146362 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.146768 kubelet[2507]: E1031 00:44:14.146373 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.147281 kubelet[2507]: E1031 00:44:14.147213 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.147281 kubelet[2507]: W1031 00:44:14.147227 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.147281 kubelet[2507]: E1031 00:44:14.147238 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:14.156639 kubelet[2507]: E1031 00:44:14.156604 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:14.156639 kubelet[2507]: W1031 00:44:14.156621 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:14.156639 kubelet[2507]: E1031 00:44:14.156638 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:14.173332 containerd[1476]: time="2025-10-31T00:44:14.173282495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tst7g,Uid:15b3eeca-c9c2-4128-8497-516ebc53c52b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c9e1f8a50f7ffda9a7fee9b65100047d7bc313e454bfde1efd269699e4c508c\"" Oct 31 00:44:14.174085 kubelet[2507]: E1031 00:44:14.174064 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:16.137367 kubelet[2507]: E1031 00:44:16.137229 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:44:16.539575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount707125493.mount: Deactivated successfully. 
Oct 31 00:44:16.903038 containerd[1476]: time="2025-10-31T00:44:16.902984023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:16.903767 containerd[1476]: time="2025-10-31T00:44:16.903714942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 31 00:44:16.904774 containerd[1476]: time="2025-10-31T00:44:16.904742430Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:16.906990 containerd[1476]: time="2025-10-31T00:44:16.906955062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:16.907668 containerd[1476]: time="2025-10-31T00:44:16.907629804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.76200194s" Oct 31 00:44:16.907668 containerd[1476]: time="2025-10-31T00:44:16.907660091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 00:44:16.909284 containerd[1476]: time="2025-10-31T00:44:16.909088465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 00:44:16.922469 containerd[1476]: time="2025-10-31T00:44:16.922420727Z" level=info msg="CreateContainer within sandbox \"2266bf090cc8e39a7a9090b5d7659b61c34f94203d3fd205d4043165abbc496f\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 00:44:16.935078 containerd[1476]: time="2025-10-31T00:44:16.935000771Z" level=info msg="CreateContainer within sandbox \"2266bf090cc8e39a7a9090b5d7659b61c34f94203d3fd205d4043165abbc496f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6cf5b9c73ce77ec0457a5bc2f4f8f63209c1976d281bdd66ee1cc7bd3ba05251\"" Oct 31 00:44:16.935557 containerd[1476]: time="2025-10-31T00:44:16.935501796Z" level=info msg="StartContainer for \"6cf5b9c73ce77ec0457a5bc2f4f8f63209c1976d281bdd66ee1cc7bd3ba05251\"" Oct 31 00:44:16.969059 systemd[1]: Started cri-containerd-6cf5b9c73ce77ec0457a5bc2f4f8f63209c1976d281bdd66ee1cc7bd3ba05251.scope - libcontainer container 6cf5b9c73ce77ec0457a5bc2f4f8f63209c1976d281bdd66ee1cc7bd3ba05251. Oct 31 00:44:17.012405 containerd[1476]: time="2025-10-31T00:44:17.011874303Z" level=info msg="StartContainer for \"6cf5b9c73ce77ec0457a5bc2f4f8f63209c1976d281bdd66ee1cc7bd3ba05251\" returns successfully" Oct 31 00:44:17.217969 kubelet[2507]: E1031 00:44:17.217813 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:17.222599 kubelet[2507]: E1031 00:44:17.222573 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.222599 kubelet[2507]: W1031 00:44:17.222595 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.222716 kubelet[2507]: E1031 00:44:17.222616 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.222911 kubelet[2507]: E1031 00:44:17.222893 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.222911 kubelet[2507]: W1031 00:44:17.222908 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.222996 kubelet[2507]: E1031 00:44:17.222943 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.223231 kubelet[2507]: E1031 00:44:17.223213 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.223231 kubelet[2507]: W1031 00:44:17.223228 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.223312 kubelet[2507]: E1031 00:44:17.223241 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.223625 kubelet[2507]: E1031 00:44:17.223605 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.223625 kubelet[2507]: W1031 00:44:17.223623 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.223702 kubelet[2507]: E1031 00:44:17.223636 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.223965 kubelet[2507]: E1031 00:44:17.223914 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.223965 kubelet[2507]: W1031 00:44:17.223964 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.224060 kubelet[2507]: E1031 00:44:17.223977 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.224271 kubelet[2507]: E1031 00:44:17.224241 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.224271 kubelet[2507]: W1031 00:44:17.224257 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.224271 kubelet[2507]: E1031 00:44:17.224270 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.224560 kubelet[2507]: E1031 00:44:17.224541 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.224560 kubelet[2507]: W1031 00:44:17.224553 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.224657 kubelet[2507]: E1031 00:44:17.224566 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.224828 kubelet[2507]: E1031 00:44:17.224810 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.224828 kubelet[2507]: W1031 00:44:17.224822 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.224915 kubelet[2507]: E1031 00:44:17.224834 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.225144 kubelet[2507]: E1031 00:44:17.225114 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.225144 kubelet[2507]: W1031 00:44:17.225136 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.225238 kubelet[2507]: E1031 00:44:17.225148 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.225410 kubelet[2507]: E1031 00:44:17.225391 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.225410 kubelet[2507]: W1031 00:44:17.225405 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.225491 kubelet[2507]: E1031 00:44:17.225417 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.225801 kubelet[2507]: E1031 00:44:17.225770 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.225801 kubelet[2507]: W1031 00:44:17.225796 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.225902 kubelet[2507]: E1031 00:44:17.225810 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.226105 kubelet[2507]: E1031 00:44:17.226081 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.226105 kubelet[2507]: W1031 00:44:17.226098 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.226203 kubelet[2507]: E1031 00:44:17.226112 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.226888 kubelet[2507]: E1031 00:44:17.226848 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.226888 kubelet[2507]: W1031 00:44:17.226861 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.226888 kubelet[2507]: E1031 00:44:17.226872 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.227483 kubelet[2507]: I1031 00:44:17.227426 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-555fc58c55-j26rg" podStartSLOduration=1.463717091 podStartE2EDuration="4.227402538s" podCreationTimestamp="2025-10-31 00:44:13 +0000 UTC" firstStartedPulling="2025-10-31 00:44:14.144612788 +0000 UTC m=+20.108744374" lastFinishedPulling="2025-10-31 00:44:16.908298235 +0000 UTC m=+22.872429821" observedRunningTime="2025-10-31 00:44:17.227023183 +0000 UTC m=+23.191154769" watchObservedRunningTime="2025-10-31 00:44:17.227402538 +0000 UTC m=+23.191534124" Oct 31 00:44:17.228705 kubelet[2507]: E1031 00:44:17.228683 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.228705 kubelet[2507]: W1031 00:44:17.228697 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.228705 kubelet[2507]: E1031 00:44:17.228708 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.229035 kubelet[2507]: E1031 00:44:17.229014 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.229225 kubelet[2507]: W1031 00:44:17.229098 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.229225 kubelet[2507]: E1031 00:44:17.229135 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.258339 kubelet[2507]: E1031 00:44:17.258313 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.258339 kubelet[2507]: W1031 00:44:17.258343 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.258458 kubelet[2507]: E1031 00:44:17.258355 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.258650 kubelet[2507]: E1031 00:44:17.258629 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.258650 kubelet[2507]: W1031 00:44:17.258644 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.258710 kubelet[2507]: E1031 00:44:17.258655 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.258944 kubelet[2507]: E1031 00:44:17.258903 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.258944 kubelet[2507]: W1031 00:44:17.258941 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.259013 kubelet[2507]: E1031 00:44:17.258955 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.259226 kubelet[2507]: E1031 00:44:17.259204 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.259226 kubelet[2507]: W1031 00:44:17.259219 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.259289 kubelet[2507]: E1031 00:44:17.259230 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.259534 kubelet[2507]: E1031 00:44:17.259514 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.259534 kubelet[2507]: W1031 00:44:17.259526 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.259534 kubelet[2507]: E1031 00:44:17.259535 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.259747 kubelet[2507]: E1031 00:44:17.259729 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.259747 kubelet[2507]: W1031 00:44:17.259741 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.259747 kubelet[2507]: E1031 00:44:17.259749 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.260017 kubelet[2507]: E1031 00:44:17.259996 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.260017 kubelet[2507]: W1031 00:44:17.260011 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.260108 kubelet[2507]: E1031 00:44:17.260023 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.260256 kubelet[2507]: E1031 00:44:17.260237 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.260256 kubelet[2507]: W1031 00:44:17.260252 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.260325 kubelet[2507]: E1031 00:44:17.260265 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.260523 kubelet[2507]: E1031 00:44:17.260506 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.260523 kubelet[2507]: W1031 00:44:17.260517 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.260583 kubelet[2507]: E1031 00:44:17.260527 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.260733 kubelet[2507]: E1031 00:44:17.260718 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.260733 kubelet[2507]: W1031 00:44:17.260729 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.260775 kubelet[2507]: E1031 00:44:17.260738 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.260972 kubelet[2507]: E1031 00:44:17.260955 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.260972 kubelet[2507]: W1031 00:44:17.260968 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.261032 kubelet[2507]: E1031 00:44:17.260980 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.261236 kubelet[2507]: E1031 00:44:17.261217 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.261236 kubelet[2507]: W1031 00:44:17.261228 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.261236 kubelet[2507]: E1031 00:44:17.261237 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.261464 kubelet[2507]: E1031 00:44:17.261447 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.261464 kubelet[2507]: W1031 00:44:17.261458 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.261464 kubelet[2507]: E1031 00:44:17.261467 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.261836 kubelet[2507]: E1031 00:44:17.261815 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.261836 kubelet[2507]: W1031 00:44:17.261830 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.261917 kubelet[2507]: E1031 00:44:17.261842 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.262115 kubelet[2507]: E1031 00:44:17.262097 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.262115 kubelet[2507]: W1031 00:44:17.262112 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.262177 kubelet[2507]: E1031 00:44:17.262138 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.262416 kubelet[2507]: E1031 00:44:17.262385 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.262416 kubelet[2507]: W1031 00:44:17.262401 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.262416 kubelet[2507]: E1031 00:44:17.262412 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:17.262749 kubelet[2507]: E1031 00:44:17.262721 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.262749 kubelet[2507]: W1031 00:44:17.262741 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.262804 kubelet[2507]: E1031 00:44:17.262755 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:17.263024 kubelet[2507]: E1031 00:44:17.263004 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:17.263024 kubelet[2507]: W1031 00:44:17.263019 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:17.263118 kubelet[2507]: E1031 00:44:17.263031 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.136272 kubelet[2507]: E1031 00:44:18.136229 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:44:18.194251 containerd[1476]: time="2025-10-31T00:44:18.194193316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:18.194969 containerd[1476]: time="2025-10-31T00:44:18.194938731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 00:44:18.196163 containerd[1476]: time="2025-10-31T00:44:18.196119235Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:18.198469 containerd[1476]: time="2025-10-31T00:44:18.198430030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:18.199299 containerd[1476]: time="2025-10-31T00:44:18.199236028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.290110935s" Oct 31 00:44:18.199299 containerd[1476]: time="2025-10-31T00:44:18.199288227Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 00:44:18.204571 containerd[1476]: time="2025-10-31T00:44:18.204539062Z" level=info msg="CreateContainer within sandbox \"3c9e1f8a50f7ffda9a7fee9b65100047d7bc313e454bfde1efd269699e4c508c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 00:44:18.220036 kubelet[2507]: I1031 00:44:18.219580 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:44:18.220036 kubelet[2507]: E1031 00:44:18.219945 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:18.224909 containerd[1476]: time="2025-10-31T00:44:18.224840692Z" level=info msg="CreateContainer within sandbox \"3c9e1f8a50f7ffda9a7fee9b65100047d7bc313e454bfde1efd269699e4c508c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e1a6dff05c5d19e8d03abbccda71afcca41a9b919b95ed1459be6ba89681d71\"" Oct 31 00:44:18.225575 containerd[1476]: time="2025-10-31T00:44:18.225542875Z" level=info msg="StartContainer for \"7e1a6dff05c5d19e8d03abbccda71afcca41a9b919b95ed1459be6ba89681d71\"" Oct 31 00:44:18.236993 kubelet[2507]: E1031 00:44:18.236953 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.236993 kubelet[2507]: W1031 00:44:18.236994 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.237184 kubelet[2507]: E1031 00:44:18.237021 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.237263 kubelet[2507]: E1031 00:44:18.237250 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.237286 kubelet[2507]: W1031 00:44:18.237263 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.237286 kubelet[2507]: E1031 00:44:18.237273 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.237500 kubelet[2507]: E1031 00:44:18.237472 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.237500 kubelet[2507]: W1031 00:44:18.237487 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.237500 kubelet[2507]: E1031 00:44:18.237496 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.237720 kubelet[2507]: E1031 00:44:18.237702 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.237720 kubelet[2507]: W1031 00:44:18.237713 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.237762 kubelet[2507]: E1031 00:44:18.237722 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.237943 kubelet[2507]: E1031 00:44:18.237919 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.237943 kubelet[2507]: W1031 00:44:18.237942 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.238003 kubelet[2507]: E1031 00:44:18.237950 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.238187 kubelet[2507]: E1031 00:44:18.238169 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.238187 kubelet[2507]: W1031 00:44:18.238181 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.238249 kubelet[2507]: E1031 00:44:18.238192 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.238415 kubelet[2507]: E1031 00:44:18.238402 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.238415 kubelet[2507]: W1031 00:44:18.238413 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.238462 kubelet[2507]: E1031 00:44:18.238422 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.238624 kubelet[2507]: E1031 00:44:18.238612 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.238624 kubelet[2507]: W1031 00:44:18.238622 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.238664 kubelet[2507]: E1031 00:44:18.238630 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.238835 kubelet[2507]: E1031 00:44:18.238823 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.238856 kubelet[2507]: W1031 00:44:18.238833 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.238856 kubelet[2507]: E1031 00:44:18.238841 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.239047 kubelet[2507]: E1031 00:44:18.239035 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.239047 kubelet[2507]: W1031 00:44:18.239045 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.239103 kubelet[2507]: E1031 00:44:18.239054 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.239256 kubelet[2507]: E1031 00:44:18.239245 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.239256 kubelet[2507]: W1031 00:44:18.239255 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.239328 kubelet[2507]: E1031 00:44:18.239263 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.240133 kubelet[2507]: E1031 00:44:18.240095 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.240133 kubelet[2507]: W1031 00:44:18.240117 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.240133 kubelet[2507]: E1031 00:44:18.240128 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.242701 kubelet[2507]: E1031 00:44:18.240327 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.242701 kubelet[2507]: W1031 00:44:18.240337 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.242701 kubelet[2507]: E1031 00:44:18.240346 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.242701 kubelet[2507]: E1031 00:44:18.240531 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.242701 kubelet[2507]: W1031 00:44:18.240539 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.242701 kubelet[2507]: E1031 00:44:18.240547 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.242701 kubelet[2507]: E1031 00:44:18.240731 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.242701 kubelet[2507]: W1031 00:44:18.240739 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.242701 kubelet[2507]: E1031 00:44:18.240747 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.266354 systemd[1]: Started cri-containerd-7e1a6dff05c5d19e8d03abbccda71afcca41a9b919b95ed1459be6ba89681d71.scope - libcontainer container 7e1a6dff05c5d19e8d03abbccda71afcca41a9b919b95ed1459be6ba89681d71. 
Oct 31 00:44:18.267805 kubelet[2507]: E1031 00:44:18.267456 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.267805 kubelet[2507]: W1031 00:44:18.267477 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.267805 kubelet[2507]: E1031 00:44:18.267496 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.267805 kubelet[2507]: E1031 00:44:18.267774 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.267805 kubelet[2507]: W1031 00:44:18.267783 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.267805 kubelet[2507]: E1031 00:44:18.267793 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.268297 kubelet[2507]: E1031 00:44:18.268137 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.268297 kubelet[2507]: W1031 00:44:18.268148 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.268297 kubelet[2507]: E1031 00:44:18.268160 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.268449 kubelet[2507]: E1031 00:44:18.268371 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.268449 kubelet[2507]: W1031 00:44:18.268379 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.268449 kubelet[2507]: E1031 00:44:18.268388 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.268631 kubelet[2507]: E1031 00:44:18.268609 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.268631 kubelet[2507]: W1031 00:44:18.268626 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.268732 kubelet[2507]: E1031 00:44:18.268635 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.268985 kubelet[2507]: E1031 00:44:18.268888 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.268985 kubelet[2507]: W1031 00:44:18.268947 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.268985 kubelet[2507]: E1031 00:44:18.268960 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.269274 kubelet[2507]: E1031 00:44:18.269260 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.269274 kubelet[2507]: W1031 00:44:18.269272 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.270138 kubelet[2507]: E1031 00:44:18.269282 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.270138 kubelet[2507]: E1031 00:44:18.269503 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.270138 kubelet[2507]: W1031 00:44:18.269512 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.270138 kubelet[2507]: E1031 00:44:18.269521 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.270138 kubelet[2507]: E1031 00:44:18.269729 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.270138 kubelet[2507]: W1031 00:44:18.269738 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.270138 kubelet[2507]: E1031 00:44:18.269747 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.271665 kubelet[2507]: E1031 00:44:18.270662 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.271665 kubelet[2507]: W1031 00:44:18.270698 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.271665 kubelet[2507]: E1031 00:44:18.270737 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.271665 kubelet[2507]: E1031 00:44:18.271201 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.271665 kubelet[2507]: W1031 00:44:18.271217 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.271665 kubelet[2507]: E1031 00:44:18.271235 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.271665 kubelet[2507]: E1031 00:44:18.271627 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.271665 kubelet[2507]: W1031 00:44:18.271637 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.271665 kubelet[2507]: E1031 00:44:18.271647 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.272038 kubelet[2507]: E1031 00:44:18.272019 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.272038 kubelet[2507]: W1031 00:44:18.272029 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.272038 kubelet[2507]: E1031 00:44:18.272039 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.272797 kubelet[2507]: E1031 00:44:18.272768 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.272840 kubelet[2507]: W1031 00:44:18.272801 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.272840 kubelet[2507]: E1031 00:44:18.272814 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.273210 kubelet[2507]: E1031 00:44:18.273181 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.273210 kubelet[2507]: W1031 00:44:18.273196 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.273210 kubelet[2507]: E1031 00:44:18.273206 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.273555 kubelet[2507]: E1031 00:44:18.273530 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.273555 kubelet[2507]: W1031 00:44:18.273543 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.273555 kubelet[2507]: E1031 00:44:18.273552 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.274075 kubelet[2507]: E1031 00:44:18.274050 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.274075 kubelet[2507]: W1031 00:44:18.274065 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.274075 kubelet[2507]: E1031 00:44:18.274075 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 00:44:18.279301 kubelet[2507]: E1031 00:44:18.278182 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 00:44:18.279301 kubelet[2507]: W1031 00:44:18.278201 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 00:44:18.279301 kubelet[2507]: E1031 00:44:18.278212 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 00:44:18.323025 systemd[1]: cri-containerd-7e1a6dff05c5d19e8d03abbccda71afcca41a9b919b95ed1459be6ba89681d71.scope: Deactivated successfully. Oct 31 00:44:18.602011 containerd[1476]: time="2025-10-31T00:44:18.601912482Z" level=info msg="StartContainer for \"7e1a6dff05c5d19e8d03abbccda71afcca41a9b919b95ed1459be6ba89681d71\" returns successfully" Oct 31 00:44:18.627546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e1a6dff05c5d19e8d03abbccda71afcca41a9b919b95ed1459be6ba89681d71-rootfs.mount: Deactivated successfully. 
Oct 31 00:44:18.631335 containerd[1476]: time="2025-10-31T00:44:18.631245561Z" level=info msg="shim disconnected" id=7e1a6dff05c5d19e8d03abbccda71afcca41a9b919b95ed1459be6ba89681d71 namespace=k8s.io Oct 31 00:44:18.631335 containerd[1476]: time="2025-10-31T00:44:18.631327926Z" level=warning msg="cleaning up after shim disconnected" id=7e1a6dff05c5d19e8d03abbccda71afcca41a9b919b95ed1459be6ba89681d71 namespace=k8s.io Oct 31 00:44:18.631335 containerd[1476]: time="2025-10-31T00:44:18.631340230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:44:19.223490 kubelet[2507]: E1031 00:44:19.223435 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:19.224859 containerd[1476]: time="2025-10-31T00:44:19.224775498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 00:44:20.136006 kubelet[2507]: E1031 00:44:20.135896 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:44:22.137821 kubelet[2507]: E1031 00:44:22.137771 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:44:22.697266 containerd[1476]: time="2025-10-31T00:44:22.697191496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:22.698054 containerd[1476]: time="2025-10-31T00:44:22.697968068Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 00:44:22.699213 containerd[1476]: time="2025-10-31T00:44:22.699174408Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:22.701731 containerd[1476]: time="2025-10-31T00:44:22.701689673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:22.702348 containerd[1476]: time="2025-10-31T00:44:22.702315943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.477458642s" Oct 31 00:44:22.702348 containerd[1476]: time="2025-10-31T00:44:22.702343084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 00:44:22.707792 containerd[1476]: time="2025-10-31T00:44:22.707753188Z" level=info msg="CreateContainer within sandbox \"3c9e1f8a50f7ffda9a7fee9b65100047d7bc313e454bfde1efd269699e4c508c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 00:44:22.723701 containerd[1476]: time="2025-10-31T00:44:22.723650774Z" level=info msg="CreateContainer within sandbox \"3c9e1f8a50f7ffda9a7fee9b65100047d7bc313e454bfde1efd269699e4c508c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e8f31b57d846c1b3dd538c977d92f1edd163cf1c495e9d0ba23df4066316ba4a\"" Oct 31 00:44:22.724218 containerd[1476]: 
time="2025-10-31T00:44:22.724186102Z" level=info msg="StartContainer for \"e8f31b57d846c1b3dd538c977d92f1edd163cf1c495e9d0ba23df4066316ba4a\"" Oct 31 00:44:22.773072 systemd[1]: Started cri-containerd-e8f31b57d846c1b3dd538c977d92f1edd163cf1c495e9d0ba23df4066316ba4a.scope - libcontainer container e8f31b57d846c1b3dd538c977d92f1edd163cf1c495e9d0ba23df4066316ba4a. Oct 31 00:44:22.812671 containerd[1476]: time="2025-10-31T00:44:22.812605137Z" level=info msg="StartContainer for \"e8f31b57d846c1b3dd538c977d92f1edd163cf1c495e9d0ba23df4066316ba4a\" returns successfully" Oct 31 00:44:23.234706 kubelet[2507]: E1031 00:44:23.234653 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:24.136486 kubelet[2507]: E1031 00:44:24.136413 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:44:24.236875 kubelet[2507]: E1031 00:44:24.236816 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:24.615134 containerd[1476]: time="2025-10-31T00:44:24.615067191Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 00:44:24.617773 kubelet[2507]: I1031 00:44:24.617722 2507 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 31 00:44:24.618810 systemd[1]: 
cri-containerd-e8f31b57d846c1b3dd538c977d92f1edd163cf1c495e9d0ba23df4066316ba4a.scope: Deactivated successfully. Oct 31 00:44:24.643848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8f31b57d846c1b3dd538c977d92f1edd163cf1c495e9d0ba23df4066316ba4a-rootfs.mount: Deactivated successfully. Oct 31 00:44:24.987299 containerd[1476]: time="2025-10-31T00:44:24.987095689Z" level=info msg="shim disconnected" id=e8f31b57d846c1b3dd538c977d92f1edd163cf1c495e9d0ba23df4066316ba4a namespace=k8s.io Oct 31 00:44:24.987299 containerd[1476]: time="2025-10-31T00:44:24.987177663Z" level=warning msg="cleaning up after shim disconnected" id=e8f31b57d846c1b3dd538c977d92f1edd163cf1c495e9d0ba23df4066316ba4a namespace=k8s.io Oct 31 00:44:24.987299 containerd[1476]: time="2025-10-31T00:44:24.987190447Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 00:44:25.332617 kubelet[2507]: E1031 00:44:25.332561 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:25.333829 containerd[1476]: time="2025-10-31T00:44:25.333682036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 00:44:25.340152 systemd[1]: Created slice kubepods-besteffort-pod249ea698_09d4_4be0_8fe6_e2048ed71a8b.slice - libcontainer container kubepods-besteffort-pod249ea698_09d4_4be0_8fe6_e2048ed71a8b.slice. 
Oct 31 00:44:25.417619 kubelet[2507]: I1031 00:44:25.417543 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5jp9\" (UniqueName: \"kubernetes.io/projected/249ea698-09d4-4be0-8fe6-e2048ed71a8b-kube-api-access-t5jp9\") pod \"calico-kube-controllers-78c45f6ffd-bcknz\" (UID: \"249ea698-09d4-4be0-8fe6-e2048ed71a8b\") " pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" Oct 31 00:44:25.417619 kubelet[2507]: I1031 00:44:25.417610 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/249ea698-09d4-4be0-8fe6-e2048ed71a8b-tigera-ca-bundle\") pod \"calico-kube-controllers-78c45f6ffd-bcknz\" (UID: \"249ea698-09d4-4be0-8fe6-e2048ed71a8b\") " pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" Oct 31 00:44:25.624915 systemd[1]: Created slice kubepods-burstable-pod1fbbd014_0e03_4481_92c6_93eea54eedf4.slice - libcontainer container kubepods-burstable-pod1fbbd014_0e03_4481_92c6_93eea54eedf4.slice. 
Oct 31 00:44:25.722942 kubelet[2507]: I1031 00:44:25.719906 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fbbd014-0e03-4481-92c6-93eea54eedf4-config-volume\") pod \"coredns-66bc5c9577-klqvh\" (UID: \"1fbbd014-0e03-4481-92c6-93eea54eedf4\") " pod="kube-system/coredns-66bc5c9577-klqvh" Oct 31 00:44:25.722942 kubelet[2507]: I1031 00:44:25.720012 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qp79\" (UniqueName: \"kubernetes.io/projected/1fbbd014-0e03-4481-92c6-93eea54eedf4-kube-api-access-6qp79\") pod \"coredns-66bc5c9577-klqvh\" (UID: \"1fbbd014-0e03-4481-92c6-93eea54eedf4\") " pod="kube-system/coredns-66bc5c9577-klqvh" Oct 31 00:44:25.780716 systemd[1]: Created slice kubepods-besteffort-podb4d94e92_3c89_4ae2_96b2_8f348d872af0.slice - libcontainer container kubepods-besteffort-podb4d94e92_3c89_4ae2_96b2_8f348d872af0.slice. Oct 31 00:44:25.794855 systemd[1]: Created slice kubepods-besteffort-podce7f89dd_5bd8_47f1_bafb_e189c5d15727.slice - libcontainer container kubepods-besteffort-podce7f89dd_5bd8_47f1_bafb_e189c5d15727.slice. Oct 31 00:44:25.799761 systemd[1]: Created slice kubepods-besteffort-pod059647c6_592e_403f_9d8e_2ac4b74608a6.slice - libcontainer container kubepods-besteffort-pod059647c6_592e_403f_9d8e_2ac4b74608a6.slice. 
Oct 31 00:44:25.820950 kubelet[2507]: I1031 00:44:25.820866 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csqbk\" (UniqueName: \"kubernetes.io/projected/059647c6-592e-403f-9d8e-2ac4b74608a6-kube-api-access-csqbk\") pod \"calico-apiserver-5976b79f87-6bc78\" (UID: \"059647c6-592e-403f-9d8e-2ac4b74608a6\") " pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" Oct 31 00:44:25.820950 kubelet[2507]: I1031 00:44:25.820945 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4d94e92-3c89-4ae2-96b2-8f348d872af0-config\") pod \"goldmane-7c778bb748-k86m7\" (UID: \"b4d94e92-3c89-4ae2-96b2-8f348d872af0\") " pod="calico-system/goldmane-7c778bb748-k86m7" Oct 31 00:44:25.821306 kubelet[2507]: I1031 00:44:25.821166 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-whisker-ca-bundle\") pod \"whisker-75d855d4bf-vqwbm\" (UID: \"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\") " pod="calico-system/whisker-75d855d4bf-vqwbm" Oct 31 00:44:25.821306 kubelet[2507]: I1031 00:44:25.821223 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/059647c6-592e-403f-9d8e-2ac4b74608a6-calico-apiserver-certs\") pod \"calico-apiserver-5976b79f87-6bc78\" (UID: \"059647c6-592e-403f-9d8e-2ac4b74608a6\") " pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" Oct 31 00:44:25.821306 kubelet[2507]: I1031 00:44:25.821249 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-whisker-backend-key-pair\") pod \"whisker-75d855d4bf-vqwbm\" (UID: 
\"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\") " pod="calico-system/whisker-75d855d4bf-vqwbm" Oct 31 00:44:25.821306 kubelet[2507]: I1031 00:44:25.821277 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b4d94e92-3c89-4ae2-96b2-8f348d872af0-goldmane-key-pair\") pod \"goldmane-7c778bb748-k86m7\" (UID: \"b4d94e92-3c89-4ae2-96b2-8f348d872af0\") " pod="calico-system/goldmane-7c778bb748-k86m7" Oct 31 00:44:25.821306 kubelet[2507]: I1031 00:44:25.821301 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d94e92-3c89-4ae2-96b2-8f348d872af0-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-k86m7\" (UID: \"b4d94e92-3c89-4ae2-96b2-8f348d872af0\") " pod="calico-system/goldmane-7c778bb748-k86m7" Oct 31 00:44:25.821472 kubelet[2507]: I1031 00:44:25.821315 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs4wm\" (UniqueName: \"kubernetes.io/projected/b4d94e92-3c89-4ae2-96b2-8f348d872af0-kube-api-access-hs4wm\") pod \"goldmane-7c778bb748-k86m7\" (UID: \"b4d94e92-3c89-4ae2-96b2-8f348d872af0\") " pod="calico-system/goldmane-7c778bb748-k86m7" Oct 31 00:44:25.821472 kubelet[2507]: I1031 00:44:25.821336 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g69dd\" (UniqueName: \"kubernetes.io/projected/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-kube-api-access-g69dd\") pod \"whisker-75d855d4bf-vqwbm\" (UID: \"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\") " pod="calico-system/whisker-75d855d4bf-vqwbm" Oct 31 00:44:25.854307 systemd[1]: Created slice kubepods-besteffort-pod24efb227_abbe_46de_b752_2903fb4a14c0.slice - libcontainer container kubepods-besteffort-pod24efb227_abbe_46de_b752_2903fb4a14c0.slice. 
Oct 31 00:44:25.867123 systemd[1]: Created slice kubepods-burstable-pod7c270719_cb33_4792_9c98_48c89084c3a9.slice - libcontainer container kubepods-burstable-pod7c270719_cb33_4792_9c98_48c89084c3a9.slice. Oct 31 00:44:25.922350 kubelet[2507]: I1031 00:44:25.921980 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c270719-cb33-4792-9c98-48c89084c3a9-config-volume\") pod \"coredns-66bc5c9577-chb2v\" (UID: \"7c270719-cb33-4792-9c98-48c89084c3a9\") " pod="kube-system/coredns-66bc5c9577-chb2v" Oct 31 00:44:25.922350 kubelet[2507]: I1031 00:44:25.922070 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5m6s\" (UniqueName: \"kubernetes.io/projected/7c270719-cb33-4792-9c98-48c89084c3a9-kube-api-access-b5m6s\") pod \"coredns-66bc5c9577-chb2v\" (UID: \"7c270719-cb33-4792-9c98-48c89084c3a9\") " pod="kube-system/coredns-66bc5c9577-chb2v" Oct 31 00:44:25.924537 kubelet[2507]: I1031 00:44:25.923719 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/24efb227-abbe-46de-b752-2903fb4a14c0-calico-apiserver-certs\") pod \"calico-apiserver-5976b79f87-gzf29\" (UID: \"24efb227-abbe-46de-b752-2903fb4a14c0\") " pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" Oct 31 00:44:25.924537 kubelet[2507]: I1031 00:44:25.923754 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb8h9\" (UniqueName: \"kubernetes.io/projected/24efb227-abbe-46de-b752-2903fb4a14c0-kube-api-access-vb8h9\") pod \"calico-apiserver-5976b79f87-gzf29\" (UID: \"24efb227-abbe-46de-b752-2903fb4a14c0\") " pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" Oct 31 00:44:26.047735 kubelet[2507]: E1031 00:44:26.047687 2507 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:26.048985 containerd[1476]: time="2025-10-31T00:44:26.048904810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-klqvh,Uid:1fbbd014-0e03-4481-92c6-93eea54eedf4,Namespace:kube-system,Attempt:0,}" Oct 31 00:44:26.055191 containerd[1476]: time="2025-10-31T00:44:26.055132824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c45f6ffd-bcknz,Uid:249ea698-09d4-4be0-8fe6-e2048ed71a8b,Namespace:calico-system,Attempt:0,}" Oct 31 00:44:26.097162 containerd[1476]: time="2025-10-31T00:44:26.097101750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-k86m7,Uid:b4d94e92-3c89-4ae2-96b2-8f348d872af0,Namespace:calico-system,Attempt:0,}" Oct 31 00:44:26.104455 containerd[1476]: time="2025-10-31T00:44:26.104102558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75d855d4bf-vqwbm,Uid:ce7f89dd-5bd8-47f1-bafb-e189c5d15727,Namespace:calico-system,Attempt:0,}" Oct 31 00:44:26.109209 containerd[1476]: time="2025-10-31T00:44:26.109151875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b79f87-6bc78,Uid:059647c6-592e-403f-9d8e-2ac4b74608a6,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:44:26.144087 systemd[1]: Created slice kubepods-besteffort-pod51ae5eae_434b_4353_bdcc_818b667dd4ed.slice - libcontainer container kubepods-besteffort-pod51ae5eae_434b_4353_bdcc_818b667dd4ed.slice. 
Oct 31 00:44:26.151792 containerd[1476]: time="2025-10-31T00:44:26.151248923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rgpjr,Uid:51ae5eae-434b-4353-bdcc-818b667dd4ed,Namespace:calico-system,Attempt:0,}" Oct 31 00:44:26.170647 containerd[1476]: time="2025-10-31T00:44:26.170336654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b79f87-gzf29,Uid:24efb227-abbe-46de-b752-2903fb4a14c0,Namespace:calico-apiserver,Attempt:0,}" Oct 31 00:44:26.173054 kubelet[2507]: E1031 00:44:26.172389 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:26.175231 containerd[1476]: time="2025-10-31T00:44:26.175197096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-chb2v,Uid:7c270719-cb33-4792-9c98-48c89084c3a9,Namespace:kube-system,Attempt:0,}" Oct 31 00:44:26.204613 containerd[1476]: time="2025-10-31T00:44:26.204536638Z" level=error msg="Failed to destroy network for sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.205041 containerd[1476]: time="2025-10-31T00:44:26.205006572Z" level=error msg="encountered an error cleaning up failed sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.205082 containerd[1476]: time="2025-10-31T00:44:26.205068970Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-78c45f6ffd-bcknz,Uid:249ea698-09d4-4be0-8fe6-e2048ed71a8b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.205391 kubelet[2507]: E1031 00:44:26.205347 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.205472 kubelet[2507]: E1031 00:44:26.205442 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" Oct 31 00:44:26.205503 kubelet[2507]: E1031 00:44:26.205470 2507 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" Oct 31 00:44:26.205557 kubelet[2507]: E1031 00:44:26.205528 2507 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78c45f6ffd-bcknz_calico-system(249ea698-09d4-4be0-8fe6-e2048ed71a8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78c45f6ffd-bcknz_calico-system(249ea698-09d4-4be0-8fe6-e2048ed71a8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" podUID="249ea698-09d4-4be0-8fe6-e2048ed71a8b" Oct 31 00:44:26.224540 containerd[1476]: time="2025-10-31T00:44:26.224480780Z" level=error msg="Failed to destroy network for sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.225106 containerd[1476]: time="2025-10-31T00:44:26.225075238Z" level=error msg="encountered an error cleaning up failed sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.225256 containerd[1476]: time="2025-10-31T00:44:26.225232413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-klqvh,Uid:1fbbd014-0e03-4481-92c6-93eea54eedf4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.225715 kubelet[2507]: E1031 00:44:26.225677 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.225797 kubelet[2507]: E1031 00:44:26.225723 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-klqvh" Oct 31 00:44:26.225797 kubelet[2507]: E1031 00:44:26.225742 2507 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-klqvh" Oct 31 00:44:26.225872 kubelet[2507]: E1031 00:44:26.225811 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-klqvh_kube-system(1fbbd014-0e03-4481-92c6-93eea54eedf4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-klqvh_kube-system(1fbbd014-0e03-4481-92c6-93eea54eedf4)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-klqvh" podUID="1fbbd014-0e03-4481-92c6-93eea54eedf4" Oct 31 00:44:26.249019 kubelet[2507]: I1031 00:44:26.248402 2507 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:26.249669 containerd[1476]: time="2025-10-31T00:44:26.249607980Z" level=info msg="StopPodSandbox for \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\"" Oct 31 00:44:26.252246 kubelet[2507]: I1031 00:44:26.252211 2507 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:26.252340 containerd[1476]: time="2025-10-31T00:44:26.251287850Z" level=info msg="Ensure that sandbox 7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444 in task-service has been cleanup successfully" Oct 31 00:44:26.253873 containerd[1476]: time="2025-10-31T00:44:26.253839970Z" level=info msg="StopPodSandbox for \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\"" Oct 31 00:44:26.255606 containerd[1476]: time="2025-10-31T00:44:26.255568150Z" level=info msg="Ensure that sandbox 6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa in task-service has been cleanup successfully" Oct 31 00:44:26.265597 containerd[1476]: time="2025-10-31T00:44:26.265524076Z" level=error msg="Failed to destroy network for sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 31 00:44:26.266044 containerd[1476]: time="2025-10-31T00:44:26.266007355Z" level=error msg="encountered an error cleaning up failed sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.266097 containerd[1476]: time="2025-10-31T00:44:26.266071546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-k86m7,Uid:b4d94e92-3c89-4ae2-96b2-8f348d872af0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.266352 kubelet[2507]: E1031 00:44:26.266309 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.266409 kubelet[2507]: E1031 00:44:26.266369 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-k86m7" Oct 31 00:44:26.266409 kubelet[2507]: E1031 
00:44:26.266390 2507 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-k86m7" Oct 31 00:44:26.266477 kubelet[2507]: E1031 00:44:26.266449 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-k86m7_calico-system(b4d94e92-3c89-4ae2-96b2-8f348d872af0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-k86m7_calico-system(b4d94e92-3c89-4ae2-96b2-8f348d872af0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-k86m7" podUID="b4d94e92-3c89-4ae2-96b2-8f348d872af0" Oct 31 00:44:26.303081 containerd[1476]: time="2025-10-31T00:44:26.303001475Z" level=error msg="StopPodSandbox for \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\" failed" error="failed to destroy network for sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.303369 kubelet[2507]: E1031 00:44:26.303322 2507 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:26.305087 kubelet[2507]: E1031 00:44:26.303390 2507 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444"} Oct 31 00:44:26.305087 kubelet[2507]: E1031 00:44:26.303470 2507 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1fbbd014-0e03-4481-92c6-93eea54eedf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:44:26.305087 kubelet[2507]: E1031 00:44:26.303502 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1fbbd014-0e03-4481-92c6-93eea54eedf4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-klqvh" podUID="1fbbd014-0e03-4481-92c6-93eea54eedf4" Oct 31 00:44:26.307937 containerd[1476]: time="2025-10-31T00:44:26.307863940Z" level=error msg="Failed to destroy network for sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.310335 containerd[1476]: time="2025-10-31T00:44:26.309262250Z" level=error msg="Failed to destroy network for sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.310335 containerd[1476]: time="2025-10-31T00:44:26.309725121Z" level=error msg="encountered an error cleaning up failed sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.310335 containerd[1476]: time="2025-10-31T00:44:26.309766539Z" level=error msg="encountered an error cleaning up failed sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.310335 containerd[1476]: time="2025-10-31T00:44:26.309840027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rgpjr,Uid:51ae5eae-434b-4353-bdcc-818b667dd4ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.310335 
containerd[1476]: time="2025-10-31T00:44:26.309772160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75d855d4bf-vqwbm,Uid:ce7f89dd-5bd8-47f1-bafb-e189c5d15727,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.310576 kubelet[2507]: E1031 00:44:26.310169 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.310576 kubelet[2507]: E1031 00:44:26.310247 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75d855d4bf-vqwbm" Oct 31 00:44:26.310576 kubelet[2507]: E1031 00:44:26.310273 2507 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75d855d4bf-vqwbm" Oct 31 00:44:26.310732 kubelet[2507]: E1031 
00:44:26.310346 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75d855d4bf-vqwbm_calico-system(ce7f89dd-5bd8-47f1-bafb-e189c5d15727)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75d855d4bf-vqwbm_calico-system(ce7f89dd-5bd8-47f1-bafb-e189c5d15727)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75d855d4bf-vqwbm" podUID="ce7f89dd-5bd8-47f1-bafb-e189c5d15727" Oct 31 00:44:26.310732 kubelet[2507]: E1031 00:44:26.310604 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.310732 kubelet[2507]: E1031 00:44:26.310639 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rgpjr" Oct 31 00:44:26.310979 kubelet[2507]: E1031 00:44:26.310657 2507 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rgpjr" Oct 31 00:44:26.310979 kubelet[2507]: E1031 00:44:26.310694 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rgpjr_calico-system(51ae5eae-434b-4353-bdcc-818b667dd4ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rgpjr_calico-system(51ae5eae-434b-4353-bdcc-818b667dd4ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:44:26.316533 containerd[1476]: time="2025-10-31T00:44:26.316486899Z" level=error msg="Failed to destroy network for sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.316911 containerd[1476]: time="2025-10-31T00:44:26.316873185Z" level=error msg="encountered an error cleaning up failed sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.318516 containerd[1476]: time="2025-10-31T00:44:26.318369990Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5976b79f87-6bc78,Uid:059647c6-592e-403f-9d8e-2ac4b74608a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.318707 kubelet[2507]: E1031 00:44:26.318670 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.318773 kubelet[2507]: E1031 00:44:26.318732 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" Oct 31 00:44:26.318773 kubelet[2507]: E1031 00:44:26.318751 2507 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" Oct 31 00:44:26.318833 kubelet[2507]: E1031 00:44:26.318808 2507 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5976b79f87-6bc78_calico-apiserver(059647c6-592e-403f-9d8e-2ac4b74608a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5976b79f87-6bc78_calico-apiserver(059647c6-592e-403f-9d8e-2ac4b74608a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" podUID="059647c6-592e-403f-9d8e-2ac4b74608a6" Oct 31 00:44:26.332104 containerd[1476]: time="2025-10-31T00:44:26.332045392Z" level=error msg="StopPodSandbox for \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\" failed" error="failed to destroy network for sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.332369 kubelet[2507]: E1031 00:44:26.332325 2507 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:26.332430 kubelet[2507]: E1031 00:44:26.332382 2507 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa"} Oct 31 
00:44:26.332430 kubelet[2507]: E1031 00:44:26.332420 2507 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"249ea698-09d4-4be0-8fe6-e2048ed71a8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:44:26.332514 kubelet[2507]: E1031 00:44:26.332449 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"249ea698-09d4-4be0-8fe6-e2048ed71a8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" podUID="249ea698-09d4-4be0-8fe6-e2048ed71a8b" Oct 31 00:44:26.339185 containerd[1476]: time="2025-10-31T00:44:26.339128774Z" level=error msg="Failed to destroy network for sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.339677 containerd[1476]: time="2025-10-31T00:44:26.339643733Z" level=error msg="encountered an error cleaning up failed sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.339730 containerd[1476]: time="2025-10-31T00:44:26.339704637Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b79f87-gzf29,Uid:24efb227-abbe-46de-b752-2903fb4a14c0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.340002 kubelet[2507]: E1031 00:44:26.339961 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.340441 kubelet[2507]: E1031 00:44:26.340023 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" Oct 31 00:44:26.340441 kubelet[2507]: E1031 00:44:26.340043 2507 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" Oct 31 00:44:26.340441 kubelet[2507]: E1031 00:44:26.340095 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5976b79f87-gzf29_calico-apiserver(24efb227-abbe-46de-b752-2903fb4a14c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5976b79f87-gzf29_calico-apiserver(24efb227-abbe-46de-b752-2903fb4a14c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" podUID="24efb227-abbe-46de-b752-2903fb4a14c0" Oct 31 00:44:26.350704 containerd[1476]: time="2025-10-31T00:44:26.350638873Z" level=error msg="Failed to destroy network for sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.351158 containerd[1476]: time="2025-10-31T00:44:26.351122062Z" level=error msg="encountered an error cleaning up failed sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.351214 containerd[1476]: time="2025-10-31T00:44:26.351185160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-chb2v,Uid:7c270719-cb33-4792-9c98-48c89084c3a9,Namespace:kube-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.351543 kubelet[2507]: E1031 00:44:26.351488 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:26.351602 kubelet[2507]: E1031 00:44:26.351570 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-chb2v" Oct 31 00:44:26.351638 kubelet[2507]: E1031 00:44:26.351621 2507 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-chb2v" Oct 31 00:44:26.351713 kubelet[2507]: E1031 00:44:26.351689 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-chb2v_kube-system(7c270719-cb33-4792-9c98-48c89084c3a9)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-66bc5c9577-chb2v_kube-system(7c270719-cb33-4792-9c98-48c89084c3a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-chb2v" podUID="7c270719-cb33-4792-9c98-48c89084c3a9" Oct 31 00:44:27.255709 kubelet[2507]: I1031 00:44:27.255651 2507 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:27.257143 containerd[1476]: time="2025-10-31T00:44:27.256484257Z" level=info msg="StopPodSandbox for \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\"" Oct 31 00:44:27.257143 containerd[1476]: time="2025-10-31T00:44:27.256697829Z" level=info msg="Ensure that sandbox b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d in task-service has been cleanup successfully" Oct 31 00:44:27.257615 kubelet[2507]: I1031 00:44:27.257428 2507 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:27.258629 containerd[1476]: time="2025-10-31T00:44:27.258105295Z" level=info msg="StopPodSandbox for \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\"" Oct 31 00:44:27.258629 containerd[1476]: time="2025-10-31T00:44:27.258265697Z" level=info msg="Ensure that sandbox c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7 in task-service has been cleanup successfully" Oct 31 00:44:27.259733 kubelet[2507]: I1031 00:44:27.259688 2507 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 
00:44:27.260710 containerd[1476]: time="2025-10-31T00:44:27.260339056Z" level=info msg="StopPodSandbox for \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\"" Oct 31 00:44:27.260710 containerd[1476]: time="2025-10-31T00:44:27.260620124Z" level=info msg="Ensure that sandbox 830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573 in task-service has been cleanup successfully" Oct 31 00:44:27.261994 kubelet[2507]: I1031 00:44:27.261412 2507 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:27.263862 containerd[1476]: time="2025-10-31T00:44:27.263819581Z" level=info msg="StopPodSandbox for \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\"" Oct 31 00:44:27.264123 containerd[1476]: time="2025-10-31T00:44:27.264085952Z" level=info msg="Ensure that sandbox 293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c in task-service has been cleanup successfully" Oct 31 00:44:27.266267 kubelet[2507]: I1031 00:44:27.266154 2507 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:27.267233 containerd[1476]: time="2025-10-31T00:44:27.266836775Z" level=info msg="StopPodSandbox for \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\"" Oct 31 00:44:27.267233 containerd[1476]: time="2025-10-31T00:44:27.267098427Z" level=info msg="Ensure that sandbox 0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0 in task-service has been cleanup successfully" Oct 31 00:44:27.294343 kubelet[2507]: I1031 00:44:27.294286 2507 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:27.295222 containerd[1476]: time="2025-10-31T00:44:27.295182930Z" level=info msg="StopPodSandbox for 
\"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\"" Oct 31 00:44:27.295437 containerd[1476]: time="2025-10-31T00:44:27.295403454Z" level=info msg="Ensure that sandbox 5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18 in task-service has been cleanup successfully" Oct 31 00:44:27.321021 containerd[1476]: time="2025-10-31T00:44:27.320956206Z" level=error msg="StopPodSandbox for \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\" failed" error="failed to destroy network for sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:27.321614 kubelet[2507]: E1031 00:44:27.321559 2507 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:27.321679 kubelet[2507]: E1031 00:44:27.321634 2507 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c"} Oct 31 00:44:27.321679 kubelet[2507]: E1031 00:44:27.321669 2507 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24efb227-abbe-46de-b752-2903fb4a14c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:44:27.321769 kubelet[2507]: E1031 00:44:27.321700 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24efb227-abbe-46de-b752-2903fb4a14c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" podUID="24efb227-abbe-46de-b752-2903fb4a14c0" Oct 31 00:44:27.338886 containerd[1476]: time="2025-10-31T00:44:27.337188552Z" level=error msg="StopPodSandbox for \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\" failed" error="failed to destroy network for sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:27.338886 containerd[1476]: time="2025-10-31T00:44:27.338830679Z" level=error msg="StopPodSandbox for \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\" failed" error="failed to destroy network for sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:27.339218 kubelet[2507]: E1031 00:44:27.337469 2507 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:27.339218 kubelet[2507]: E1031 00:44:27.337515 2507 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d"} Oct 31 00:44:27.339218 kubelet[2507]: E1031 00:44:27.337545 2507 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c270719-cb33-4792-9c98-48c89084c3a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:44:27.339218 kubelet[2507]: E1031 00:44:27.337570 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c270719-cb33-4792-9c98-48c89084c3a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-chb2v" podUID="7c270719-cb33-4792-9c98-48c89084c3a9" Oct 31 00:44:27.339541 kubelet[2507]: E1031 00:44:27.339038 2507 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:27.339541 kubelet[2507]: E1031 00:44:27.339066 2507 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573"} Oct 31 00:44:27.339541 kubelet[2507]: E1031 00:44:27.339091 2507 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:44:27.339541 kubelet[2507]: E1031 00:44:27.339115 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75d855d4bf-vqwbm" podUID="ce7f89dd-5bd8-47f1-bafb-e189c5d15727" Oct 31 00:44:27.349749 containerd[1476]: time="2025-10-31T00:44:27.349691503Z" level=error msg="StopPodSandbox for \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\" failed" error="failed to destroy network for 
sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:27.350165 kubelet[2507]: E1031 00:44:27.350083 2507 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:27.350165 kubelet[2507]: E1031 00:44:27.350162 2507 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7"} Oct 31 00:44:27.350667 kubelet[2507]: E1031 00:44:27.350196 2507 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"51ae5eae-434b-4353-bdcc-818b667dd4ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:44:27.350667 kubelet[2507]: E1031 00:44:27.350226 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"51ae5eae-434b-4353-bdcc-818b667dd4ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:44:27.350667 kubelet[2507]: E1031 00:44:27.350356 2507 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:27.350667 kubelet[2507]: E1031 00:44:27.350381 2507 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18"} Oct 31 00:44:27.350857 containerd[1476]: time="2025-10-31T00:44:27.350165794Z" level=error msg="StopPodSandbox for \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\" failed" error="failed to destroy network for sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:27.350892 kubelet[2507]: E1031 00:44:27.350403 2507 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4d94e92-3c89-4ae2-96b2-8f348d872af0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Oct 31 00:44:27.350892 kubelet[2507]: E1031 00:44:27.350425 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4d94e92-3c89-4ae2-96b2-8f348d872af0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-k86m7" podUID="b4d94e92-3c89-4ae2-96b2-8f348d872af0" Oct 31 00:44:27.353348 containerd[1476]: time="2025-10-31T00:44:27.353281214Z" level=error msg="StopPodSandbox for \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\" failed" error="failed to destroy network for sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 00:44:27.353525 kubelet[2507]: E1031 00:44:27.353487 2507 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:27.353602 kubelet[2507]: E1031 00:44:27.353529 2507 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0"} Oct 31 00:44:27.353639 kubelet[2507]: E1031 00:44:27.353607 2507 
kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"059647c6-592e-403f-9d8e-2ac4b74608a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 00:44:27.353734 kubelet[2507]: E1031 00:44:27.353634 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"059647c6-592e-403f-9d8e-2ac4b74608a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" podUID="059647c6-592e-403f-9d8e-2ac4b74608a6" Oct 31 00:44:30.952431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134787701.mount: Deactivated successfully. 
Oct 31 00:44:33.733249 containerd[1476]: time="2025-10-31T00:44:33.733180439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:33.743443 containerd[1476]: time="2025-10-31T00:44:33.743368151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 00:44:33.747184 containerd[1476]: time="2025-10-31T00:44:33.745698719Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:33.755063 containerd[1476]: time="2025-10-31T00:44:33.754962554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 00:44:33.756277 containerd[1476]: time="2025-10-31T00:44:33.756241267Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.42249932s" Oct 31 00:44:33.756332 containerd[1476]: time="2025-10-31T00:44:33.756274981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 00:44:33.773227 containerd[1476]: time="2025-10-31T00:44:33.773163351Z" level=info msg="CreateContainer within sandbox \"3c9e1f8a50f7ffda9a7fee9b65100047d7bc313e454bfde1efd269699e4c508c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 00:44:34.439635 containerd[1476]: time="2025-10-31T00:44:34.439565674Z" level=info 
msg="CreateContainer within sandbox \"3c9e1f8a50f7ffda9a7fee9b65100047d7bc313e454bfde1efd269699e4c508c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"02ea2b01bb0ce73d3d1a856e4a4447de0a92c1b4feff9bd6f5529c561e50aca3\"" Oct 31 00:44:34.440197 containerd[1476]: time="2025-10-31T00:44:34.440169559Z" level=info msg="StartContainer for \"02ea2b01bb0ce73d3d1a856e4a4447de0a92c1b4feff9bd6f5529c561e50aca3\"" Oct 31 00:44:34.520110 systemd[1]: Started cri-containerd-02ea2b01bb0ce73d3d1a856e4a4447de0a92c1b4feff9bd6f5529c561e50aca3.scope - libcontainer container 02ea2b01bb0ce73d3d1a856e4a4447de0a92c1b4feff9bd6f5529c561e50aca3. Oct 31 00:44:34.655711 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 00:44:34.656044 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 00:44:34.656400 containerd[1476]: time="2025-10-31T00:44:34.656285160Z" level=info msg="StartContainer for \"02ea2b01bb0ce73d3d1a856e4a4447de0a92c1b4feff9bd6f5529c561e50aca3\" returns successfully" Oct 31 00:44:35.317273 kubelet[2507]: E1031 00:44:35.317210 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:35.595589 kubelet[2507]: I1031 00:44:35.595295 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tst7g" podStartSLOduration=3.013301909 podStartE2EDuration="22.595276385s" podCreationTimestamp="2025-10-31 00:44:13 +0000 UTC" firstStartedPulling="2025-10-31 00:44:14.175250259 +0000 UTC m=+20.139381845" lastFinishedPulling="2025-10-31 00:44:33.757224745 +0000 UTC m=+39.721356321" observedRunningTime="2025-10-31 00:44:35.594464921 +0000 UTC m=+41.558596527" watchObservedRunningTime="2025-10-31 00:44:35.595276385 +0000 UTC m=+41.559407971" Oct 31 00:44:35.643288 systemd[1]: Started 
sshd@7-10.0.0.107:22-10.0.0.1:40292.service - OpenSSH per-connection server daemon (10.0.0.1:40292). Oct 31 00:44:35.650702 containerd[1476]: time="2025-10-31T00:44:35.649722408Z" level=info msg="StopPodSandbox for \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\"" Oct 31 00:44:35.698960 sshd[3841]: Accepted publickey for core from 10.0.0.1 port 40292 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:44:35.700048 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:44:35.710783 systemd-logind[1450]: New session 8 of user core. Oct 31 00:44:35.715286 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.819 [INFO][3860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.821 [INFO][3860] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" iface="eth0" netns="/var/run/netns/cni-100261e5-ec5a-cd88-d088-d2e39d396f72" Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.823 [INFO][3860] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" iface="eth0" netns="/var/run/netns/cni-100261e5-ec5a-cd88-d088-d2e39d396f72" Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.823 [INFO][3860] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" iface="eth0" netns="/var/run/netns/cni-100261e5-ec5a-cd88-d088-d2e39d396f72" Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.823 [INFO][3860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.823 [INFO][3860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.904 [INFO][3880] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" HandleID="k8s-pod-network.830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Workload="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.905 [INFO][3880] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.905 [INFO][3880] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.912 [WARNING][3880] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" HandleID="k8s-pod-network.830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Workload="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.912 [INFO][3880] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" HandleID="k8s-pod-network.830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Workload="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.915 [INFO][3880] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:35.922631 containerd[1476]: 2025-10-31 00:44:35.918 [INFO][3860] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:35.924126 containerd[1476]: time="2025-10-31T00:44:35.924064819Z" level=info msg="TearDown network for sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\" successfully" Oct 31 00:44:35.924126 containerd[1476]: time="2025-10-31T00:44:35.924115564Z" level=info msg="StopPodSandbox for \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\" returns successfully" Oct 31 00:44:35.929338 sshd[3841]: pam_unix(sshd:session): session closed for user core Oct 31 00:44:35.929593 systemd[1]: run-netns-cni\x2d100261e5\x2dec5a\x2dcd88\x2dd088\x2dd2e39d396f72.mount: Deactivated successfully. Oct 31 00:44:35.937253 systemd[1]: sshd@7-10.0.0.107:22-10.0.0.1:40292.service: Deactivated successfully. Oct 31 00:44:35.940159 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 00:44:35.942802 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Oct 31 00:44:35.945594 systemd-logind[1450]: Removed session 8. 
Oct 31 00:44:36.092178 kubelet[2507]: I1031 00:44:36.092113 2507 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-whisker-ca-bundle\") pod \"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\" (UID: \"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\") " Oct 31 00:44:36.092178 kubelet[2507]: I1031 00:44:36.092176 2507 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-whisker-backend-key-pair\") pod \"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\" (UID: \"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\") " Oct 31 00:44:36.092400 kubelet[2507]: I1031 00:44:36.092220 2507 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g69dd\" (UniqueName: \"kubernetes.io/projected/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-kube-api-access-g69dd\") pod \"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\" (UID: \"ce7f89dd-5bd8-47f1-bafb-e189c5d15727\") " Oct 31 00:44:36.093147 kubelet[2507]: I1031 00:44:36.092992 2507 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ce7f89dd-5bd8-47f1-bafb-e189c5d15727" (UID: "ce7f89dd-5bd8-47f1-bafb-e189c5d15727"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 00:44:36.098175 kubelet[2507]: I1031 00:44:36.098036 2507 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ce7f89dd-5bd8-47f1-bafb-e189c5d15727" (UID: "ce7f89dd-5bd8-47f1-bafb-e189c5d15727"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 00:44:36.098175 kubelet[2507]: I1031 00:44:36.098109 2507 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-kube-api-access-g69dd" (OuterVolumeSpecName: "kube-api-access-g69dd") pod "ce7f89dd-5bd8-47f1-bafb-e189c5d15727" (UID: "ce7f89dd-5bd8-47f1-bafb-e189c5d15727"). InnerVolumeSpecName "kube-api-access-g69dd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 00:44:36.099035 systemd[1]: var-lib-kubelet-pods-ce7f89dd\x2d5bd8\x2d47f1\x2dbafb\x2de189c5d15727-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg69dd.mount: Deactivated successfully. Oct 31 00:44:36.102054 systemd[1]: var-lib-kubelet-pods-ce7f89dd\x2d5bd8\x2d47f1\x2dbafb\x2de189c5d15727-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 00:44:36.144966 systemd[1]: Removed slice kubepods-besteffort-podce7f89dd_5bd8_47f1_bafb_e189c5d15727.slice - libcontainer container kubepods-besteffort-podce7f89dd_5bd8_47f1_bafb_e189c5d15727.slice. 
Oct 31 00:44:36.193151 kubelet[2507]: I1031 00:44:36.192993 2507 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g69dd\" (UniqueName: \"kubernetes.io/projected/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-kube-api-access-g69dd\") on node \"localhost\" DevicePath \"\"" Oct 31 00:44:36.193151 kubelet[2507]: I1031 00:44:36.193027 2507 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 00:44:36.193151 kubelet[2507]: I1031 00:44:36.193036 2507 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce7f89dd-5bd8-47f1-bafb-e189c5d15727-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 00:44:36.371447 systemd[1]: Created slice kubepods-besteffort-pod905b0eae_aa04_4c59_afcc_92bdce8d8829.slice - libcontainer container kubepods-besteffort-pod905b0eae_aa04_4c59_afcc_92bdce8d8829.slice. 
Oct 31 00:44:36.494975 kubelet[2507]: I1031 00:44:36.494801 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/905b0eae-aa04-4c59-afcc-92bdce8d8829-whisker-backend-key-pair\") pod \"whisker-bcfbfb9d5-d68xk\" (UID: \"905b0eae-aa04-4c59-afcc-92bdce8d8829\") " pod="calico-system/whisker-bcfbfb9d5-d68xk" Oct 31 00:44:36.494975 kubelet[2507]: I1031 00:44:36.494870 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/905b0eae-aa04-4c59-afcc-92bdce8d8829-whisker-ca-bundle\") pod \"whisker-bcfbfb9d5-d68xk\" (UID: \"905b0eae-aa04-4c59-afcc-92bdce8d8829\") " pod="calico-system/whisker-bcfbfb9d5-d68xk" Oct 31 00:44:36.494975 kubelet[2507]: I1031 00:44:36.494895 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mwtm\" (UniqueName: \"kubernetes.io/projected/905b0eae-aa04-4c59-afcc-92bdce8d8829-kube-api-access-6mwtm\") pod \"whisker-bcfbfb9d5-d68xk\" (UID: \"905b0eae-aa04-4c59-afcc-92bdce8d8829\") " pod="calico-system/whisker-bcfbfb9d5-d68xk" Oct 31 00:44:36.678259 containerd[1476]: time="2025-10-31T00:44:36.678202891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bcfbfb9d5-d68xk,Uid:905b0eae-aa04-4c59-afcc-92bdce8d8829,Namespace:calico-system,Attempt:0,}" Oct 31 00:44:36.799661 systemd-networkd[1409]: califfc3cfb2a01: Link UP Oct 31 00:44:36.800075 systemd-networkd[1409]: califfc3cfb2a01: Gained carrier Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.716 [INFO][3907] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.725 [INFO][3907] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0 
whisker-bcfbfb9d5- calico-system 905b0eae-aa04-4c59-afcc-92bdce8d8829 994 0 2025-10-31 00:44:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bcfbfb9d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-bcfbfb9d5-d68xk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califfc3cfb2a01 [] [] }} ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Namespace="calico-system" Pod="whisker-bcfbfb9d5-d68xk" WorkloadEndpoint="localhost-k8s-whisker--bcfbfb9d5--d68xk-" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.725 [INFO][3907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Namespace="calico-system" Pod="whisker-bcfbfb9d5-d68xk" WorkloadEndpoint="localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.754 [INFO][3920] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" HandleID="k8s-pod-network.164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Workload="localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.754 [INFO][3920] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" HandleID="k8s-pod-network.164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Workload="localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-bcfbfb9d5-d68xk", "timestamp":"2025-10-31 00:44:36.754405468 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.754 [INFO][3920] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.754 [INFO][3920] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.754 [INFO][3920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.762 [INFO][3920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" host="localhost" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.767 [INFO][3920] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.771 [INFO][3920] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.773 [INFO][3920] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.775 [INFO][3920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.775 [INFO][3920] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" host="localhost" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.776 [INFO][3920] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5 Oct 31 00:44:36.814202 
containerd[1476]: 2025-10-31 00:44:36.782 [INFO][3920] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" host="localhost" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.787 [INFO][3920] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" host="localhost" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.787 [INFO][3920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" host="localhost" Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.787 [INFO][3920] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:36.814202 containerd[1476]: 2025-10-31 00:44:36.787 [INFO][3920] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" HandleID="k8s-pod-network.164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Workload="localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0" Oct 31 00:44:36.814825 containerd[1476]: 2025-10-31 00:44:36.791 [INFO][3907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Namespace="calico-system" Pod="whisker-bcfbfb9d5-d68xk" WorkloadEndpoint="localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0", GenerateName:"whisker-bcfbfb9d5-", Namespace:"calico-system", SelfLink:"", UID:"905b0eae-aa04-4c59-afcc-92bdce8d8829", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, 
time.October, 31, 0, 44, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bcfbfb9d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-bcfbfb9d5-d68xk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califfc3cfb2a01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:36.814825 containerd[1476]: 2025-10-31 00:44:36.791 [INFO][3907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Namespace="calico-system" Pod="whisker-bcfbfb9d5-d68xk" WorkloadEndpoint="localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0" Oct 31 00:44:36.814825 containerd[1476]: 2025-10-31 00:44:36.791 [INFO][3907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfc3cfb2a01 ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Namespace="calico-system" Pod="whisker-bcfbfb9d5-d68xk" WorkloadEndpoint="localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0" Oct 31 00:44:36.814825 containerd[1476]: 2025-10-31 00:44:36.800 [INFO][3907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Namespace="calico-system" Pod="whisker-bcfbfb9d5-d68xk" 
WorkloadEndpoint="localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0" Oct 31 00:44:36.814825 containerd[1476]: 2025-10-31 00:44:36.800 [INFO][3907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Namespace="calico-system" Pod="whisker-bcfbfb9d5-d68xk" WorkloadEndpoint="localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0", GenerateName:"whisker-bcfbfb9d5-", Namespace:"calico-system", SelfLink:"", UID:"905b0eae-aa04-4c59-afcc-92bdce8d8829", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bcfbfb9d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5", Pod:"whisker-bcfbfb9d5-d68xk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califfc3cfb2a01", MAC:"b6:ff:0d:53:18:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:36.814825 containerd[1476]: 2025-10-31 00:44:36.810 [INFO][3907] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5" Namespace="calico-system" Pod="whisker-bcfbfb9d5-d68xk" WorkloadEndpoint="localhost-k8s-whisker--bcfbfb9d5--d68xk-eth0" Oct 31 00:44:36.842493 containerd[1476]: time="2025-10-31T00:44:36.842403429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:44:36.842493 containerd[1476]: time="2025-10-31T00:44:36.842457480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:44:36.842493 containerd[1476]: time="2025-10-31T00:44:36.842470375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:36.842612 containerd[1476]: time="2025-10-31T00:44:36.842555564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:36.863082 systemd[1]: Started cri-containerd-164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5.scope - libcontainer container 164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5. 
Oct 31 00:44:36.876429 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:44:36.900547 containerd[1476]: time="2025-10-31T00:44:36.900503364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bcfbfb9d5-d68xk,Uid:905b0eae-aa04-4c59-afcc-92bdce8d8829,Namespace:calico-system,Attempt:0,} returns sandbox id \"164413e7876f299b02e4a85873a3b2c5559febbf4045fda52865c83ba6a143d5\"" Oct 31 00:44:36.903836 containerd[1476]: time="2025-10-31T00:44:36.902627774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:44:37.246164 containerd[1476]: time="2025-10-31T00:44:37.246004471Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:37.286232 containerd[1476]: time="2025-10-31T00:44:37.286167901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:44:37.297294 containerd[1476]: time="2025-10-31T00:44:37.290742301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:44:37.297575 kubelet[2507]: E1031 00:44:37.297528 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:44:37.297673 kubelet[2507]: E1031 00:44:37.297589 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:44:37.297711 kubelet[2507]: E1031 00:44:37.297697 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-bcfbfb9d5-d68xk_calico-system(905b0eae-aa04-4c59-afcc-92bdce8d8829): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:37.298633 containerd[1476]: time="2025-10-31T00:44:37.298597725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:44:37.597635 kubelet[2507]: I1031 00:44:37.597565 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:44:37.598875 kubelet[2507]: E1031 00:44:37.598231 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:37.672268 containerd[1476]: time="2025-10-31T00:44:37.672207533Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:37.673614 containerd[1476]: time="2025-10-31T00:44:37.673493348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:44:37.673779 containerd[1476]: time="2025-10-31T00:44:37.673581113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 
00:44:37.673901 kubelet[2507]: E1031 00:44:37.673838 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:44:37.673987 kubelet[2507]: E1031 00:44:37.673905 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:44:37.674086 kubelet[2507]: E1031 00:44:37.674048 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-bcfbfb9d5-d68xk_calico-system(905b0eae-aa04-4c59-afcc-92bdce8d8829): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:37.674148 kubelet[2507]: E1031 00:44:37.674109 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bcfbfb9d5-d68xk" podUID="905b0eae-aa04-4c59-afcc-92bdce8d8829" Oct 31 00:44:38.138136 containerd[1476]: time="2025-10-31T00:44:38.137432138Z" level=info msg="StopPodSandbox for \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\"" Oct 31 00:44:38.142256 kubelet[2507]: I1031 00:44:38.142201 2507 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce7f89dd-5bd8-47f1-bafb-e189c5d15727" path="/var/lib/kubelet/pods/ce7f89dd-5bd8-47f1-bafb-e189c5d15727/volumes" Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.192 [INFO][4093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.193 [INFO][4093] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" iface="eth0" netns="/var/run/netns/cni-06889e66-cf3c-7f7f-d0bc-2d34eb04564d" Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.193 [INFO][4093] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" iface="eth0" netns="/var/run/netns/cni-06889e66-cf3c-7f7f-d0bc-2d34eb04564d" Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.193 [INFO][4093] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" iface="eth0" netns="/var/run/netns/cni-06889e66-cf3c-7f7f-d0bc-2d34eb04564d" Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.193 [INFO][4093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.193 [INFO][4093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.217 [INFO][4102] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" HandleID="k8s-pod-network.c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.217 [INFO][4102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.217 [INFO][4102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.223 [WARNING][4102] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" HandleID="k8s-pod-network.c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.223 [INFO][4102] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" HandleID="k8s-pod-network.c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.226 [INFO][4102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:38.233350 containerd[1476]: 2025-10-31 00:44:38.229 [INFO][4093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:38.233802 containerd[1476]: time="2025-10-31T00:44:38.233520871Z" level=info msg="TearDown network for sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\" successfully" Oct 31 00:44:38.233802 containerd[1476]: time="2025-10-31T00:44:38.233547801Z" level=info msg="StopPodSandbox for \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\" returns successfully" Oct 31 00:44:38.237146 systemd[1]: run-netns-cni\x2d06889e66\x2dcf3c\x2d7f7f\x2dd0bc\x2d2d34eb04564d.mount: Deactivated successfully. 
Oct 31 00:44:38.240158 containerd[1476]: time="2025-10-31T00:44:38.240109995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rgpjr,Uid:51ae5eae-434b-4353-bdcc-818b667dd4ed,Namespace:calico-system,Attempt:1,}" Oct 31 00:44:38.327261 kubelet[2507]: E1031 00:44:38.327212 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:38.335299 kubelet[2507]: E1031 00:44:38.335229 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bcfbfb9d5-d68xk" podUID="905b0eae-aa04-4c59-afcc-92bdce8d8829" Oct 31 00:44:38.463543 systemd-networkd[1409]: calib1ae8323217: Link UP Oct 31 00:44:38.466805 systemd-networkd[1409]: calib1ae8323217: Gained carrier Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.291 [INFO][4110] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.310 [INFO][4110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-csi--node--driver--rgpjr-eth0 csi-node-driver- calico-system 51ae5eae-434b-4353-bdcc-818b667dd4ed 1018 0 2025-10-31 00:44:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rgpjr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib1ae8323217 [] [] }} ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Namespace="calico-system" Pod="csi-node-driver-rgpjr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rgpjr-" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.310 [INFO][4110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Namespace="calico-system" Pod="csi-node-driver-rgpjr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.391 [INFO][4144] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" HandleID="k8s-pod-network.cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.392 [INFO][4144] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" HandleID="k8s-pod-network.cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b43d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-rgpjr", "timestamp":"2025-10-31 00:44:38.391630453 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.392 [INFO][4144] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.392 [INFO][4144] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.392 [INFO][4144] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.403 [INFO][4144] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" host="localhost" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.427 [INFO][4144] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.433 [INFO][4144] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.436 [INFO][4144] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.439 [INFO][4144] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.439 [INFO][4144] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" host="localhost" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.440 [INFO][4144] ipam/ipam.go 1780: 
Creating new handle: k8s-pod-network.cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.448 [INFO][4144] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" host="localhost" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.454 [INFO][4144] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" host="localhost" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.454 [INFO][4144] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" host="localhost" Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.454 [INFO][4144] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:44:38.480756 containerd[1476]: 2025-10-31 00:44:38.454 [INFO][4144] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" HandleID="k8s-pod-network.cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:38.481610 containerd[1476]: 2025-10-31 00:44:38.460 [INFO][4110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Namespace="calico-system" Pod="csi-node-driver-rgpjr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rgpjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rgpjr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51ae5eae-434b-4353-bdcc-818b667dd4ed", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rgpjr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib1ae8323217", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:38.481610 containerd[1476]: 2025-10-31 00:44:38.460 [INFO][4110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Namespace="calico-system" Pod="csi-node-driver-rgpjr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:38.481610 containerd[1476]: 2025-10-31 00:44:38.460 [INFO][4110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1ae8323217 ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Namespace="calico-system" Pod="csi-node-driver-rgpjr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:38.481610 containerd[1476]: 2025-10-31 00:44:38.464 [INFO][4110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Namespace="calico-system" Pod="csi-node-driver-rgpjr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:38.481610 containerd[1476]: 2025-10-31 00:44:38.464 [INFO][4110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Namespace="calico-system" Pod="csi-node-driver-rgpjr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rgpjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rgpjr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51ae5eae-434b-4353-bdcc-818b667dd4ed", ResourceVersion:"1018", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd", Pod:"csi-node-driver-rgpjr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib1ae8323217", MAC:"96:f6:af:c8:a2:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:38.481610 containerd[1476]: 2025-10-31 00:44:38.475 [INFO][4110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd" Namespace="calico-system" Pod="csi-node-driver-rgpjr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:38.488172 systemd-networkd[1409]: califfc3cfb2a01: Gained IPv6LL Oct 31 00:44:38.605969 kernel: bpftool[4198]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 31 00:44:38.875872 containerd[1476]: time="2025-10-31T00:44:38.872969071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:44:38.875872 containerd[1476]: time="2025-10-31T00:44:38.873289383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:44:38.875872 containerd[1476]: time="2025-10-31T00:44:38.873306064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:38.875872 containerd[1476]: time="2025-10-31T00:44:38.873457558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:38.916111 systemd-networkd[1409]: vxlan.calico: Link UP Oct 31 00:44:38.916129 systemd-networkd[1409]: vxlan.calico: Gained carrier Oct 31 00:44:38.919087 systemd[1]: Started cri-containerd-cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd.scope - libcontainer container cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd. 
Oct 31 00:44:38.935219 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:44:38.949438 containerd[1476]: time="2025-10-31T00:44:38.949202769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rgpjr,Uid:51ae5eae-434b-4353-bdcc-818b667dd4ed,Namespace:calico-system,Attempt:1,} returns sandbox id \"cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd\"" Oct 31 00:44:38.951272 containerd[1476]: time="2025-10-31T00:44:38.950799077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:44:39.137102 containerd[1476]: time="2025-10-31T00:44:39.136937823Z" level=info msg="StopPodSandbox for \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\"" Oct 31 00:44:39.137502 containerd[1476]: time="2025-10-31T00:44:39.137462147Z" level=info msg="StopPodSandbox for \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\"" Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.195 [INFO][4306] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.195 [INFO][4306] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" iface="eth0" netns="/var/run/netns/cni-53de7b0c-8d22-8423-3a5e-166ad9ffb1f2" Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.195 [INFO][4306] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" iface="eth0" netns="/var/run/netns/cni-53de7b0c-8d22-8423-3a5e-166ad9ffb1f2" Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.196 [INFO][4306] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" iface="eth0" netns="/var/run/netns/cni-53de7b0c-8d22-8423-3a5e-166ad9ffb1f2" Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.196 [INFO][4306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.196 [INFO][4306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.228 [INFO][4324] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" HandleID="k8s-pod-network.7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.228 [INFO][4324] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.229 [INFO][4324] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.237 [WARNING][4324] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" HandleID="k8s-pod-network.7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.237 [INFO][4324] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" HandleID="k8s-pod-network.7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.239 [INFO][4324] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:39.250072 containerd[1476]: 2025-10-31 00:44:39.243 [INFO][4306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:39.254240 containerd[1476]: time="2025-10-31T00:44:39.250508116Z" level=info msg="TearDown network for sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\" successfully" Oct 31 00:44:39.254240 containerd[1476]: time="2025-10-31T00:44:39.250537462Z" level=info msg="StopPodSandbox for \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\" returns successfully" Oct 31 00:44:39.254052 systemd[1]: run-netns-cni\x2d53de7b0c\x2d8d22\x2d8423\x2d3a5e\x2d166ad9ffb1f2.mount: Deactivated successfully. 
Oct 31 00:44:39.260866 kubelet[2507]: E1031 00:44:39.259485 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:39.261312 containerd[1476]: time="2025-10-31T00:44:39.260094978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-klqvh,Uid:1fbbd014-0e03-4481-92c6-93eea54eedf4,Namespace:kube-system,Attempt:1,}" Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.212 [INFO][4305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.213 [INFO][4305] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" iface="eth0" netns="/var/run/netns/cni-eb9e0c1e-1cd3-75b0-6482-99f433b5ea60" Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.214 [INFO][4305] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" iface="eth0" netns="/var/run/netns/cni-eb9e0c1e-1cd3-75b0-6482-99f433b5ea60" Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.214 [INFO][4305] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" iface="eth0" netns="/var/run/netns/cni-eb9e0c1e-1cd3-75b0-6482-99f433b5ea60" Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.214 [INFO][4305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.214 [INFO][4305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.258 [INFO][4337] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" HandleID="k8s-pod-network.b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.258 [INFO][4337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.259 [INFO][4337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.269 [WARNING][4337] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" HandleID="k8s-pod-network.b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.270 [INFO][4337] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" HandleID="k8s-pod-network.b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.272 [INFO][4337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:39.280742 containerd[1476]: 2025-10-31 00:44:39.276 [INFO][4305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:39.280742 containerd[1476]: time="2025-10-31T00:44:39.280507407Z" level=info msg="TearDown network for sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\" successfully" Oct 31 00:44:39.280742 containerd[1476]: time="2025-10-31T00:44:39.280541401Z" level=info msg="StopPodSandbox for \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\" returns successfully" Oct 31 00:44:39.284364 systemd[1]: run-netns-cni\x2deb9e0c1e\x2d1cd3\x2d75b0\x2d6482\x2d99f433b5ea60.mount: Deactivated successfully. 
Oct 31 00:44:39.288309 containerd[1476]: time="2025-10-31T00:44:39.284709909Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:44:39.291009 containerd[1476]: time="2025-10-31T00:44:39.290945316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 31 00:44:39.291087 containerd[1476]: time="2025-10-31T00:44:39.291005059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 31 00:44:39.291706 kubelet[2507]: E1031 00:44:39.291627    2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 00:44:39.291788 kubelet[2507]: E1031 00:44:39.291708    2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 00:44:39.291973 kubelet[2507]: E1031 00:44:39.291805    2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rgpjr_calico-system(51ae5eae-434b-4353-bdcc-818b667dd4ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:44:39.293492 kubelet[2507]: E1031 00:44:39.293380    2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:44:39.296555 containerd[1476]: time="2025-10-31T00:44:39.296428091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-chb2v,Uid:7c270719-cb33-4792-9c98-48c89084c3a9,Namespace:kube-system,Attempt:1,}"
Oct 31 00:44:39.300432 containerd[1476]: time="2025-10-31T00:44:39.299653598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 31 00:44:39.443829 systemd-networkd[1409]: cali54582a20f38: Link UP
Oct 31 00:44:39.444235 systemd-networkd[1409]: cali54582a20f38: Gained carrier
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.364 [INFO][4364] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--klqvh-eth0 coredns-66bc5c9577- kube-system 1fbbd014-0e03-4481-92c6-93eea54eedf4 1036 0 2025-10-31 00:44:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-klqvh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali54582a20f38 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Namespace="kube-system" Pod="coredns-66bc5c9577-klqvh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--klqvh-"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.365 [INFO][4364] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Namespace="kube-system" Pod="coredns-66bc5c9577-klqvh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--klqvh-eth0"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.404 [INFO][4403] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" HandleID="k8s-pod-network.030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.404 [INFO][4403] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" HandleID="k8s-pod-network.030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-klqvh", "timestamp":"2025-10-31 00:44:39.404299345 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.404 [INFO][4403] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.404 [INFO][4403] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.404 [INFO][4403] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.411 [INFO][4403] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" host="localhost"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.416 [INFO][4403] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.420 [INFO][4403] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.422 [INFO][4403] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.424 [INFO][4403] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.424 [INFO][4403] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" host="localhost"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.425 [INFO][4403] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.429 [INFO][4403] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" host="localhost"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.436 [INFO][4403] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" host="localhost"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.436 [INFO][4403] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" host="localhost"
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.436 [INFO][4403] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Oct 31 00:44:39.463367 containerd[1476]: 2025-10-31 00:44:39.436 [INFO][4403] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" HandleID="k8s-pod-network.030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0"
Oct 31 00:44:39.464197 containerd[1476]: 2025-10-31 00:44:39.439 [INFO][4364] cni-plugin/k8s.go 418: Populated endpoint ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Namespace="kube-system" Pod="coredns-66bc5c9577-klqvh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--klqvh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1fbbd014-0e03-4481-92c6-93eea54eedf4", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-klqvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54582a20f38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 31 00:44:39.464197 containerd[1476]: 2025-10-31 00:44:39.439 [INFO][4364] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Namespace="kube-system" Pod="coredns-66bc5c9577-klqvh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--klqvh-eth0"
Oct 31 00:44:39.464197 containerd[1476]: 2025-10-31 00:44:39.439 [INFO][4364] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54582a20f38 ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Namespace="kube-system" Pod="coredns-66bc5c9577-klqvh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--klqvh-eth0"
Oct 31 00:44:39.464197 containerd[1476]: 2025-10-31 00:44:39.443 [INFO][4364] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Namespace="kube-system" Pod="coredns-66bc5c9577-klqvh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--klqvh-eth0"
Oct 31 00:44:39.464197 containerd[1476]: 2025-10-31 00:44:39.444 [INFO][4364] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Namespace="kube-system" Pod="coredns-66bc5c9577-klqvh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--klqvh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1fbbd014-0e03-4481-92c6-93eea54eedf4", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2", Pod:"coredns-66bc5c9577-klqvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54582a20f38", MAC:"06:f5:72:94:8c:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 31 00:44:39.464197 containerd[1476]: 2025-10-31 00:44:39.458 [INFO][4364] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2" Namespace="kube-system" Pod="coredns-66bc5c9577-klqvh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--klqvh-eth0"
Oct 31 00:44:39.487560 containerd[1476]: time="2025-10-31T00:44:39.487400907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:44:39.487560 containerd[1476]: time="2025-10-31T00:44:39.487480045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:44:39.487560 containerd[1476]: time="2025-10-31T00:44:39.487491136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:44:39.487957 containerd[1476]: time="2025-10-31T00:44:39.487575153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:44:39.519232 systemd[1]: Started cri-containerd-030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2.scope - libcontainer container 030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2.
Oct 31 00:44:39.535184 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 31 00:44:39.549770 systemd-networkd[1409]: calidbae70475d0: Link UP
Oct 31 00:44:39.551748 systemd-networkd[1409]: calidbae70475d0: Gained carrier
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.375 [INFO][4384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--chb2v-eth0 coredns-66bc5c9577- kube-system 7c270719-cb33-4792-9c98-48c89084c3a9 1037 0 2025-10-31 00:44:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-chb2v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidbae70475d0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Namespace="kube-system" Pod="coredns-66bc5c9577-chb2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--chb2v-"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.375 [INFO][4384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Namespace="kube-system" Pod="coredns-66bc5c9577-chb2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--chb2v-eth0"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.410 [INFO][4410] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" HandleID="k8s-pod-network.ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.410 [INFO][4410] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" HandleID="k8s-pod-network.ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e580), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-chb2v", "timestamp":"2025-10-31 00:44:39.410331561 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.410 [INFO][4410] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.436 [INFO][4410] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.436 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.513 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" host="localhost"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.519 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.524 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.526 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.528 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.528 [INFO][4410] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" host="localhost"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.529 [INFO][4410] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.534 [INFO][4410] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" host="localhost"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.541 [INFO][4410] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" host="localhost"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.541 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" host="localhost"
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.541 [INFO][4410] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Oct 31 00:44:39.565945 containerd[1476]: 2025-10-31 00:44:39.541 [INFO][4410] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" HandleID="k8s-pod-network.ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0"
Oct 31 00:44:39.566610 containerd[1476]: 2025-10-31 00:44:39.546 [INFO][4384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Namespace="kube-system" Pod="coredns-66bc5c9577-chb2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--chb2v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c270719-cb33-4792-9c98-48c89084c3a9", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-chb2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbae70475d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 31 00:44:39.566610 containerd[1476]: 2025-10-31 00:44:39.546 [INFO][4384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Namespace="kube-system" Pod="coredns-66bc5c9577-chb2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--chb2v-eth0"
Oct 31 00:44:39.566610 containerd[1476]: 2025-10-31 00:44:39.546 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbae70475d0 ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Namespace="kube-system" Pod="coredns-66bc5c9577-chb2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--chb2v-eth0"
Oct 31 00:44:39.566610 containerd[1476]: 2025-10-31 00:44:39.550 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Namespace="kube-system" Pod="coredns-66bc5c9577-chb2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--chb2v-eth0"
Oct 31 00:44:39.566610 containerd[1476]: 2025-10-31 00:44:39.551 [INFO][4384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Namespace="kube-system" Pod="coredns-66bc5c9577-chb2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--chb2v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c270719-cb33-4792-9c98-48c89084c3a9", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0", Pod:"coredns-66bc5c9577-chb2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbae70475d0", MAC:"82:ac:71:ee:ba:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 31 00:44:39.566610 containerd[1476]: 2025-10-31 00:44:39.562 [INFO][4384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0" Namespace="kube-system" Pod="coredns-66bc5c9577-chb2v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--chb2v-eth0"
Oct 31 00:44:39.576527 containerd[1476]: time="2025-10-31T00:44:39.576332285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-klqvh,Uid:1fbbd014-0e03-4481-92c6-93eea54eedf4,Namespace:kube-system,Attempt:1,} returns sandbox id \"030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2\""
Oct 31 00:44:39.577691 kubelet[2507]: E1031 00:44:39.577558    2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:44:39.585217 containerd[1476]: time="2025-10-31T00:44:39.584702052Z" level=info msg="CreateContainer within sandbox \"030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 31 00:44:39.598016 containerd[1476]: time="2025-10-31T00:44:39.597895325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 00:44:39.598016 containerd[1476]: time="2025-10-31T00:44:39.597974744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 00:44:39.598238 containerd[1476]: time="2025-10-31T00:44:39.598161795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:44:39.599014 containerd[1476]: time="2025-10-31T00:44:39.598910241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 00:44:39.607224 containerd[1476]: time="2025-10-31T00:44:39.607167145Z" level=info msg="CreateContainer within sandbox \"030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ae478e66690be5ccde6501b2178f4fc7382f35c4e6104559d314f2132c0a771\""
Oct 31 00:44:39.608966 containerd[1476]: time="2025-10-31T00:44:39.608066002Z" level=info msg="StartContainer for \"4ae478e66690be5ccde6501b2178f4fc7382f35c4e6104559d314f2132c0a771\""
Oct 31 00:44:39.622130 systemd[1]: Started cri-containerd-ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0.scope - libcontainer container ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0.
Oct 31 00:44:39.644056 systemd[1]: Started cri-containerd-4ae478e66690be5ccde6501b2178f4fc7382f35c4e6104559d314f2132c0a771.scope - libcontainer container 4ae478e66690be5ccde6501b2178f4fc7382f35c4e6104559d314f2132c0a771.
Oct 31 00:44:39.648275 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Oct 31 00:44:39.678541 containerd[1476]: time="2025-10-31T00:44:39.678493467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-chb2v,Uid:7c270719-cb33-4792-9c98-48c89084c3a9,Namespace:kube-system,Attempt:1,} returns sandbox id \"ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0\""
Oct 31 00:44:39.679513 kubelet[2507]: E1031 00:44:39.679488    2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:44:39.687882 containerd[1476]: time="2025-10-31T00:44:39.687837712Z" level=info msg="CreateContainer within sandbox \"ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 31 00:44:39.691063 containerd[1476]: time="2025-10-31T00:44:39.691025908Z" level=info msg="StartContainer for \"4ae478e66690be5ccde6501b2178f4fc7382f35c4e6104559d314f2132c0a771\" returns successfully"
Oct 31 00:44:39.729616 containerd[1476]: time="2025-10-31T00:44:39.708075553Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:44:39.955366 containerd[1476]: time="2025-10-31T00:44:39.955157957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 31 00:44:39.955366 containerd[1476]: time="2025-10-31T00:44:39.955217539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 31 00:44:39.955633 kubelet[2507]: E1031 00:44:39.955565    2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 00:44:39.955691 kubelet[2507]: E1031 00:44:39.955658    2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 00:44:39.955860 kubelet[2507]: E1031 00:44:39.955823    2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rgpjr_calico-system(51ae5eae-434b-4353-bdcc-818b667dd4ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:44:39.955972 kubelet[2507]: E1031 00:44:39.955903    2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed"
Oct 31 00:44:40.152169 systemd-networkd[1409]: calib1ae8323217: Gained IPv6LL
Oct 31 00:44:40.314490 containerd[1476]: time="2025-10-31T00:44:40.314422801Z" level=info msg="CreateContainer within sandbox \"ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91aed532bce667d241dd8f1c5e737ee986c92cebb471fb12a11ed9492880bbce\""
Oct 31 00:44:40.317326 containerd[1476]: time="2025-10-31T00:44:40.316734892Z" level=info msg="StartContainer for \"91aed532bce667d241dd8f1c5e737ee986c92cebb471fb12a11ed9492880bbce\""
Oct 31 00:44:40.352730 kubelet[2507]: E1031 00:44:40.352655    2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:44:40.355499 kubelet[2507]: E1031 00:44:40.353837    2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed"
Oct 31 00:44:40.372454 systemd[1]: run-containerd-runc-k8s.io-91aed532bce667d241dd8f1c5e737ee986c92cebb471fb12a11ed9492880bbce-runc.l6899u.mount: Deactivated successfully.
Oct 31 00:44:40.383171 systemd[1]: Started cri-containerd-91aed532bce667d241dd8f1c5e737ee986c92cebb471fb12a11ed9492880bbce.scope - libcontainer container 91aed532bce667d241dd8f1c5e737ee986c92cebb471fb12a11ed9492880bbce.
Oct 31 00:44:40.421407 containerd[1476]: time="2025-10-31T00:44:40.421178163Z" level=info msg="StartContainer for \"91aed532bce667d241dd8f1c5e737ee986c92cebb471fb12a11ed9492880bbce\" returns successfully"
Oct 31 00:44:40.600149 systemd-networkd[1409]: vxlan.calico: Gained IPv6LL
Oct 31 00:44:40.920168 systemd-networkd[1409]: cali54582a20f38: Gained IPv6LL
Oct 31 00:44:40.947051 systemd[1]: Started sshd@8-10.0.0.107:22-10.0.0.1:42338.service - OpenSSH per-connection server daemon (10.0.0.1:42338).
Oct 31 00:44:40.996040 sshd[4606]: Accepted publickey for core from 10.0.0.1 port 42338 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:44:40.998263 sshd[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:44:41.003652 systemd-logind[1450]: New session 9 of user core.
Oct 31 00:44:41.011209 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 31 00:44:41.136943 containerd[1476]: time="2025-10-31T00:44:41.136943421Z" level=info msg="StopPodSandbox for \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\""
Oct 31 00:44:41.169499 sshd[4606]: pam_unix(sshd:session): session closed for user core
Oct 31 00:44:41.176339 systemd[1]: sshd@8-10.0.0.107:22-10.0.0.1:42338.service: Deactivated successfully.
Oct 31 00:44:41.181326 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 00:44:41.183989 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Oct 31 00:44:41.186342 systemd-logind[1450]: Removed session 9. Oct 31 00:44:41.209447 kubelet[2507]: I1031 00:44:41.209341 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-klqvh" podStartSLOduration=40.209291219 podStartE2EDuration="40.209291219s" podCreationTimestamp="2025-10-31 00:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:44:40.391138111 +0000 UTC m=+46.355269717" watchObservedRunningTime="2025-10-31 00:44:41.209291219 +0000 UTC m=+47.173422805" Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.209 [INFO][4629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.209 [INFO][4629] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" iface="eth0" netns="/var/run/netns/cni-38dacf9c-9e0f-e09a-7319-614ad112d96c" Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.210 [INFO][4629] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" iface="eth0" netns="/var/run/netns/cni-38dacf9c-9e0f-e09a-7319-614ad112d96c" Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.210 [INFO][4629] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" iface="eth0" netns="/var/run/netns/cni-38dacf9c-9e0f-e09a-7319-614ad112d96c" Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.210 [INFO][4629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.210 [INFO][4629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.233 [INFO][4640] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" HandleID="k8s-pod-network.6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.233 [INFO][4640] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.233 [INFO][4640] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.240 [WARNING][4640] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" HandleID="k8s-pod-network.6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.240 [INFO][4640] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" HandleID="k8s-pod-network.6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.241 [INFO][4640] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:41.248028 containerd[1476]: 2025-10-31 00:44:41.244 [INFO][4629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:41.248582 containerd[1476]: time="2025-10-31T00:44:41.248195980Z" level=info msg="TearDown network for sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\" successfully" Oct 31 00:44:41.248582 containerd[1476]: time="2025-10-31T00:44:41.248223011Z" level=info msg="StopPodSandbox for \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\" returns successfully" Oct 31 00:44:41.252100 systemd[1]: run-netns-cni\x2d38dacf9c\x2d9e0f\x2de09a\x2d7319\x2d614ad112d96c.mount: Deactivated successfully. 
Oct 31 00:44:41.254365 containerd[1476]: time="2025-10-31T00:44:41.254296494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c45f6ffd-bcknz,Uid:249ea698-09d4-4be0-8fe6-e2048ed71a8b,Namespace:calico-system,Attempt:1,}" Oct 31 00:44:41.358579 kubelet[2507]: E1031 00:44:41.358043 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:41.358579 kubelet[2507]: E1031 00:44:41.358125 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:41.378368 systemd-networkd[1409]: cali95f14768d34: Link UP Oct 31 00:44:41.379264 systemd-networkd[1409]: cali95f14768d34: Gained carrier Oct 31 00:44:41.394614 kubelet[2507]: I1031 00:44:41.394134 2507 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-chb2v" podStartSLOduration=40.393871047 podStartE2EDuration="40.393871047s" podCreationTimestamp="2025-10-31 00:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 00:44:41.372004623 +0000 UTC m=+47.336136209" watchObservedRunningTime="2025-10-31 00:44:41.393871047 +0000 UTC m=+47.358002633" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.307 [INFO][4647] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0 calico-kube-controllers-78c45f6ffd- calico-system 249ea698-09d4-4be0-8fe6-e2048ed71a8b 1111 0 2025-10-31 00:44:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78c45f6ffd projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78c45f6ffd-bcknz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali95f14768d34 [] [] }} ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Namespace="calico-system" Pod="calico-kube-controllers-78c45f6ffd-bcknz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.307 [INFO][4647] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Namespace="calico-system" Pod="calico-kube-controllers-78c45f6ffd-bcknz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.333 [INFO][4662] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" HandleID="k8s-pod-network.205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.333 [INFO][4662] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" HandleID="k8s-pod-network.205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000354fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78c45f6ffd-bcknz", "timestamp":"2025-10-31 00:44:41.333739813 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.333 [INFO][4662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.334 [INFO][4662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.334 [INFO][4662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.340 [INFO][4662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" host="localhost" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.344 [INFO][4662] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.348 [INFO][4662] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.351 [INFO][4662] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.353 [INFO][4662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.353 [INFO][4662] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" host="localhost" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.355 [INFO][4662] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1 Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 
00:44:41.360 [INFO][4662] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" host="localhost" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.368 [INFO][4662] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" host="localhost" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.368 [INFO][4662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" host="localhost" Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.368 [INFO][4662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:41.401183 containerd[1476]: 2025-10-31 00:44:41.368 [INFO][4662] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" HandleID="k8s-pod-network.205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:41.402410 containerd[1476]: 2025-10-31 00:44:41.374 [INFO][4647] cni-plugin/k8s.go 418: Populated endpoint ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Namespace="calico-system" Pod="calico-kube-controllers-78c45f6ffd-bcknz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0", GenerateName:"calico-kube-controllers-78c45f6ffd-", Namespace:"calico-system", SelfLink:"", UID:"249ea698-09d4-4be0-8fe6-e2048ed71a8b", 
ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c45f6ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78c45f6ffd-bcknz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95f14768d34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:41.402410 containerd[1476]: 2025-10-31 00:44:41.374 [INFO][4647] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Namespace="calico-system" Pod="calico-kube-controllers-78c45f6ffd-bcknz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:41.402410 containerd[1476]: 2025-10-31 00:44:41.374 [INFO][4647] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95f14768d34 ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Namespace="calico-system" Pod="calico-kube-controllers-78c45f6ffd-bcknz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:41.402410 containerd[1476]: 
2025-10-31 00:44:41.378 [INFO][4647] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Namespace="calico-system" Pod="calico-kube-controllers-78c45f6ffd-bcknz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:41.402410 containerd[1476]: 2025-10-31 00:44:41.380 [INFO][4647] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Namespace="calico-system" Pod="calico-kube-controllers-78c45f6ffd-bcknz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0", GenerateName:"calico-kube-controllers-78c45f6ffd-", Namespace:"calico-system", SelfLink:"", UID:"249ea698-09d4-4be0-8fe6-e2048ed71a8b", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c45f6ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1", Pod:"calico-kube-controllers-78c45f6ffd-bcknz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95f14768d34", MAC:"ea:be:f0:51:11:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:41.402410 containerd[1476]: 2025-10-31 00:44:41.393 [INFO][4647] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1" Namespace="calico-system" Pod="calico-kube-controllers-78c45f6ffd-bcknz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:41.434094 systemd-networkd[1409]: calidbae70475d0: Gained IPv6LL Oct 31 00:44:41.439910 containerd[1476]: time="2025-10-31T00:44:41.439786681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:44:41.439910 containerd[1476]: time="2025-10-31T00:44:41.439850961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:44:41.439910 containerd[1476]: time="2025-10-31T00:44:41.439863765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:41.440266 containerd[1476]: time="2025-10-31T00:44:41.440001684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:41.476297 systemd[1]: Started cri-containerd-205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1.scope - libcontainer container 205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1. 
Oct 31 00:44:41.502831 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:44:41.534112 containerd[1476]: time="2025-10-31T00:44:41.534064501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c45f6ffd-bcknz,Uid:249ea698-09d4-4be0-8fe6-e2048ed71a8b,Namespace:calico-system,Attempt:1,} returns sandbox id \"205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1\"" Oct 31 00:44:41.535956 containerd[1476]: time="2025-10-31T00:44:41.535876072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:44:41.905561 containerd[1476]: time="2025-10-31T00:44:41.905491574Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:41.906645 containerd[1476]: time="2025-10-31T00:44:41.906606477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:44:41.906823 containerd[1476]: time="2025-10-31T00:44:41.906657222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:44:41.906934 kubelet[2507]: E1031 00:44:41.906881 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:44:41.907012 kubelet[2507]: E1031 00:44:41.906952 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:44:41.907068 kubelet[2507]: E1031 00:44:41.907045 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-78c45f6ffd-bcknz_calico-system(249ea698-09d4-4be0-8fe6-e2048ed71a8b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:41.907144 kubelet[2507]: E1031 00:44:41.907079 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" podUID="249ea698-09d4-4be0-8fe6-e2048ed71a8b" Oct 31 00:44:42.137711 containerd[1476]: time="2025-10-31T00:44:42.137232145Z" level=info msg="StopPodSandbox for \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\"" Oct 31 00:44:42.137711 containerd[1476]: time="2025-10-31T00:44:42.137557666Z" level=info msg="StopPodSandbox for \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\"" Oct 31 00:44:42.137955 containerd[1476]: time="2025-10-31T00:44:42.137590367Z" level=info msg="StopPodSandbox for 
\"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\"" Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.211 [INFO][4756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.211 [INFO][4756] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" iface="eth0" netns="/var/run/netns/cni-d97200ee-5932-86b7-162a-59d7338f7bd1" Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.211 [INFO][4756] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" iface="eth0" netns="/var/run/netns/cni-d97200ee-5932-86b7-162a-59d7338f7bd1" Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.212 [INFO][4756] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" iface="eth0" netns="/var/run/netns/cni-d97200ee-5932-86b7-162a-59d7338f7bd1" Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.212 [INFO][4756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.212 [INFO][4756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.247 [INFO][4779] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" HandleID="k8s-pod-network.0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.247 [INFO][4779] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.247 [INFO][4779] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.255 [WARNING][4779] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" HandleID="k8s-pod-network.0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.255 [INFO][4779] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" HandleID="k8s-pod-network.0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.256 [INFO][4779] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:42.263859 containerd[1476]: 2025-10-31 00:44:42.259 [INFO][4756] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:42.265916 containerd[1476]: time="2025-10-31T00:44:42.264063898Z" level=info msg="TearDown network for sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\" successfully" Oct 31 00:44:42.265916 containerd[1476]: time="2025-10-31T00:44:42.265909361Z" level=info msg="StopPodSandbox for \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\" returns successfully" Oct 31 00:44:42.267252 systemd[1]: run-netns-cni\x2dd97200ee\x2d5932\x2d86b7\x2d162a\x2d59d7338f7bd1.mount: Deactivated successfully. 
Oct 31 00:44:42.277832 containerd[1476]: time="2025-10-31T00:44:42.277761292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b79f87-6bc78,Uid:059647c6-592e-403f-9d8e-2ac4b74608a6,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.224 [INFO][4757] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.224 [INFO][4757] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" iface="eth0" netns="/var/run/netns/cni-f96fc337-4115-d12d-d8f1-9ea56134dbf0" Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.224 [INFO][4757] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" iface="eth0" netns="/var/run/netns/cni-f96fc337-4115-d12d-d8f1-9ea56134dbf0" Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.225 [INFO][4757] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" iface="eth0" netns="/var/run/netns/cni-f96fc337-4115-d12d-d8f1-9ea56134dbf0" Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.225 [INFO][4757] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.225 [INFO][4757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.268 [INFO][4787] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" HandleID="k8s-pod-network.293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.269 [INFO][4787] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.269 [INFO][4787] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.275 [WARNING][4787] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" HandleID="k8s-pod-network.293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.275 [INFO][4787] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" HandleID="k8s-pod-network.293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.277 [INFO][4787] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:42.282713 containerd[1476]: 2025-10-31 00:44:42.280 [INFO][4757] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:42.285191 containerd[1476]: time="2025-10-31T00:44:42.285155403Z" level=info msg="TearDown network for sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\" successfully" Oct 31 00:44:42.285191 containerd[1476]: time="2025-10-31T00:44:42.285190238Z" level=info msg="StopPodSandbox for \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\" returns successfully" Oct 31 00:44:42.288796 systemd[1]: run-netns-cni\x2df96fc337\x2d4115\x2dd12d\x2dd8f1\x2d9ea56134dbf0.mount: Deactivated successfully. 
Oct 31 00:44:42.290821 containerd[1476]: time="2025-10-31T00:44:42.290778619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b79f87-gzf29,Uid:24efb227-abbe-46de-b752-2903fb4a14c0,Namespace:calico-apiserver,Attempt:1,}" Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.240 [INFO][4762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.240 [INFO][4762] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" iface="eth0" netns="/var/run/netns/cni-d38a02d4-4c22-2c52-28fc-8f2922180a54" Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.240 [INFO][4762] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" iface="eth0" netns="/var/run/netns/cni-d38a02d4-4c22-2c52-28fc-8f2922180a54" Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.241 [INFO][4762] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" iface="eth0" netns="/var/run/netns/cni-d38a02d4-4c22-2c52-28fc-8f2922180a54" Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.241 [INFO][4762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.241 [INFO][4762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.273 [INFO][4792] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" HandleID="k8s-pod-network.5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.273 [INFO][4792] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.277 [INFO][4792] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.283 [WARNING][4792] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" HandleID="k8s-pod-network.5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.283 [INFO][4792] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" HandleID="k8s-pod-network.5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.285 [INFO][4792] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:42.292520 containerd[1476]: 2025-10-31 00:44:42.289 [INFO][4762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:42.293250 containerd[1476]: time="2025-10-31T00:44:42.292876838Z" level=info msg="TearDown network for sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\" successfully" Oct 31 00:44:42.293250 containerd[1476]: time="2025-10-31T00:44:42.292906534Z" level=info msg="StopPodSandbox for \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\" returns successfully" Oct 31 00:44:42.296301 systemd[1]: run-netns-cni\x2dd38a02d4\x2d4c22\x2d2c52\x2d28fc\x2d8f2922180a54.mount: Deactivated successfully. 
Oct 31 00:44:42.298060 containerd[1476]: time="2025-10-31T00:44:42.298029782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-k86m7,Uid:b4d94e92-3c89-4ae2-96b2-8f348d872af0,Namespace:calico-system,Attempt:1,}" Oct 31 00:44:42.362979 kubelet[2507]: E1031 00:44:42.362783 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:42.364606 kubelet[2507]: E1031 00:44:42.363956 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:42.365254 kubelet[2507]: E1031 00:44:42.365166 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" podUID="249ea698-09d4-4be0-8fe6-e2048ed71a8b" Oct 31 00:44:42.447178 systemd-networkd[1409]: cali62c0971c7e8: Link UP Oct 31 00:44:42.447996 systemd-networkd[1409]: cali62c0971c7e8: Gained carrier Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.348 [INFO][4804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0 calico-apiserver-5976b79f87- calico-apiserver 059647c6-592e-403f-9d8e-2ac4b74608a6 1142 0 2025-10-31 00:44:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:5976b79f87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5976b79f87-6bc78 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali62c0971c7e8 [] [] }} ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-6bc78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--6bc78-" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.349 [INFO][4804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-6bc78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.400 [INFO][4845] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" HandleID="k8s-pod-network.11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.400 [INFO][4845] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" HandleID="k8s-pod-network.11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052cb60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5976b79f87-6bc78", "timestamp":"2025-10-31 00:44:42.400368082 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.400 [INFO][4845] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.400 [INFO][4845] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.400 [INFO][4845] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.407 [INFO][4845] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" host="localhost" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.411 [INFO][4845] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.415 [INFO][4845] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.417 [INFO][4845] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.419 [INFO][4845] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.419 [INFO][4845] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" host="localhost" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.422 [INFO][4845] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 
00:44:42.427 [INFO][4845] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" host="localhost" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.435 [INFO][4845] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" host="localhost" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.435 [INFO][4845] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" host="localhost" Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.435 [INFO][4845] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:42.467373 containerd[1476]: 2025-10-31 00:44:42.435 [INFO][4845] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" HandleID="k8s-pod-network.11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:42.468772 containerd[1476]: 2025-10-31 00:44:42.441 [INFO][4804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-6bc78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0", GenerateName:"calico-apiserver-5976b79f87-", Namespace:"calico-apiserver", SelfLink:"", UID:"059647c6-592e-403f-9d8e-2ac4b74608a6", ResourceVersion:"1142", Generation:0, 
CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5976b79f87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5976b79f87-6bc78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62c0971c7e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:42.468772 containerd[1476]: 2025-10-31 00:44:42.441 [INFO][4804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-6bc78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:42.468772 containerd[1476]: 2025-10-31 00:44:42.442 [INFO][4804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62c0971c7e8 ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-6bc78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:42.468772 containerd[1476]: 2025-10-31 00:44:42.445 [INFO][4804] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-6bc78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:42.468772 containerd[1476]: 2025-10-31 00:44:42.446 [INFO][4804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-6bc78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0", GenerateName:"calico-apiserver-5976b79f87-", Namespace:"calico-apiserver", SelfLink:"", UID:"059647c6-592e-403f-9d8e-2ac4b74608a6", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5976b79f87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d", Pod:"calico-apiserver-5976b79f87-6bc78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62c0971c7e8", MAC:"1a:ce:14:ec:e9:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:42.468772 containerd[1476]: 2025-10-31 00:44:42.457 [INFO][4804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-6bc78" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:42.499464 containerd[1476]: time="2025-10-31T00:44:42.499301852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:44:42.501941 containerd[1476]: time="2025-10-31T00:44:42.500225726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:44:42.501941 containerd[1476]: time="2025-10-31T00:44:42.500312609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:42.501941 containerd[1476]: time="2025-10-31T00:44:42.500458192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:42.545587 systemd[1]: Started cri-containerd-11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d.scope - libcontainer container 11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d. 
Oct 31 00:44:42.564512 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:44:42.592309 systemd-networkd[1409]: cali61b1c1efdf7: Link UP Oct 31 00:44:42.594028 containerd[1476]: time="2025-10-31T00:44:42.593991874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b79f87-6bc78,Uid:059647c6-592e-403f-9d8e-2ac4b74608a6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d\"" Oct 31 00:44:42.594901 systemd-networkd[1409]: cali61b1c1efdf7: Gained carrier Oct 31 00:44:42.599689 containerd[1476]: time="2025-10-31T00:44:42.599447576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.365 [INFO][4829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--k86m7-eth0 goldmane-7c778bb748- calico-system b4d94e92-3c89-4ae2-96b2-8f348d872af0 1144 0 2025-10-31 00:44:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-k86m7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali61b1c1efdf7 [] [] }} ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Namespace="calico-system" Pod="goldmane-7c778bb748-k86m7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k86m7-" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.365 [INFO][4829] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Namespace="calico-system" Pod="goldmane-7c778bb748-k86m7" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.412 [INFO][4853] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" HandleID="k8s-pod-network.869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.413 [INFO][4853] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" HandleID="k8s-pod-network.869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf000), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-k86m7", "timestamp":"2025-10-31 00:44:42.41290062 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.413 [INFO][4853] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.435 [INFO][4853] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.435 [INFO][4853] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.511 [INFO][4853] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" host="localhost" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.528 [INFO][4853] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.543 [INFO][4853] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.547 [INFO][4853] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.550 [INFO][4853] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.550 [INFO][4853] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" host="localhost" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.551 [INFO][4853] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161 Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.555 [INFO][4853] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" host="localhost" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.580 [INFO][4853] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" host="localhost" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.580 [INFO][4853] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" host="localhost" Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.580 [INFO][4853] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:42.689040 containerd[1476]: 2025-10-31 00:44:42.580 [INFO][4853] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" HandleID="k8s-pod-network.869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:42.689998 containerd[1476]: 2025-10-31 00:44:42.587 [INFO][4829] cni-plugin/k8s.go 418: Populated endpoint ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Namespace="calico-system" Pod="goldmane-7c778bb748-k86m7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--k86m7-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b4d94e92-3c89-4ae2-96b2-8f348d872af0", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-k86m7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61b1c1efdf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:42.689998 containerd[1476]: 2025-10-31 00:44:42.588 [INFO][4829] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Namespace="calico-system" Pod="goldmane-7c778bb748-k86m7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:42.689998 containerd[1476]: 2025-10-31 00:44:42.588 [INFO][4829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61b1c1efdf7 ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Namespace="calico-system" Pod="goldmane-7c778bb748-k86m7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:42.689998 containerd[1476]: 2025-10-31 00:44:42.598 [INFO][4829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Namespace="calico-system" Pod="goldmane-7c778bb748-k86m7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:42.689998 containerd[1476]: 2025-10-31 00:44:42.600 [INFO][4829] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Namespace="calico-system" Pod="goldmane-7c778bb748-k86m7" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--k86m7-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b4d94e92-3c89-4ae2-96b2-8f348d872af0", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161", Pod:"goldmane-7c778bb748-k86m7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61b1c1efdf7", MAC:"96:80:b1:c1:af:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:42.689998 containerd[1476]: 2025-10-31 00:44:42.683 [INFO][4829] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161" Namespace="calico-system" Pod="goldmane-7c778bb748-k86m7" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:42.716108 containerd[1476]: time="2025-10-31T00:44:42.715937424Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:44:42.716334 containerd[1476]: time="2025-10-31T00:44:42.716133702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:44:42.716334 containerd[1476]: time="2025-10-31T00:44:42.716156445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:42.716334 containerd[1476]: time="2025-10-31T00:44:42.716270950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:42.728753 systemd-networkd[1409]: cali58618479eae: Link UP Oct 31 00:44:42.730049 systemd-networkd[1409]: cali58618479eae: Gained carrier Oct 31 00:44:42.744361 systemd[1]: Started cri-containerd-869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161.scope - libcontainer container 869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161. 
Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.383 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0 calico-apiserver-5976b79f87- calico-apiserver 24efb227-abbe-46de-b752-2903fb4a14c0 1143 0 2025-10-31 00:44:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5976b79f87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5976b79f87-gzf29 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali58618479eae [] [] }} ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-gzf29" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--gzf29-" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.383 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-gzf29" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.422 [INFO][4859] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" HandleID="k8s-pod-network.5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.422 [INFO][4859] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" 
HandleID="k8s-pod-network.5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035fda0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5976b79f87-gzf29", "timestamp":"2025-10-31 00:44:42.422060957 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.422 [INFO][4859] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.581 [INFO][4859] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.581 [INFO][4859] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.610 [INFO][4859] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" host="localhost" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.687 [INFO][4859] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.699 [INFO][4859] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.702 [INFO][4859] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.705 [INFO][4859] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 00:44:42.748715 containerd[1476]: 
2025-10-31 00:44:42.705 [INFO][4859] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" host="localhost" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.706 [INFO][4859] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110 Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.710 [INFO][4859] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" host="localhost" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.717 [INFO][4859] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" host="localhost" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.717 [INFO][4859] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" host="localhost" Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.717 [INFO][4859] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 00:44:42.748715 containerd[1476]: 2025-10-31 00:44:42.717 [INFO][4859] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" HandleID="k8s-pod-network.5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:42.749478 containerd[1476]: 2025-10-31 00:44:42.721 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-gzf29" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0", GenerateName:"calico-apiserver-5976b79f87-", Namespace:"calico-apiserver", SelfLink:"", UID:"24efb227-abbe-46de-b752-2903fb4a14c0", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5976b79f87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5976b79f87-gzf29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58618479eae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:42.749478 containerd[1476]: 2025-10-31 00:44:42.722 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-gzf29" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:42.749478 containerd[1476]: 2025-10-31 00:44:42.722 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58618479eae ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-gzf29" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:42.749478 containerd[1476]: 2025-10-31 00:44:42.729 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-gzf29" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:42.749478 containerd[1476]: 2025-10-31 00:44:42.731 [INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-gzf29" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0", 
GenerateName:"calico-apiserver-5976b79f87-", Namespace:"calico-apiserver", SelfLink:"", UID:"24efb227-abbe-46de-b752-2903fb4a14c0", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5976b79f87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110", Pod:"calico-apiserver-5976b79f87-gzf29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58618479eae", MAC:"12:0f:10:69:49:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:42.749478 containerd[1476]: 2025-10-31 00:44:42.741 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110" Namespace="calico-apiserver" Pod="calico-apiserver-5976b79f87-gzf29" WorkloadEndpoint="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:42.762409 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:44:42.771374 containerd[1476]: time="2025-10-31T00:44:42.771259340Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 00:44:42.771483 containerd[1476]: time="2025-10-31T00:44:42.771345872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 00:44:42.771483 containerd[1476]: time="2025-10-31T00:44:42.771361552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:42.771595 containerd[1476]: time="2025-10-31T00:44:42.771547842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 00:44:42.798221 systemd[1]: Started cri-containerd-5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110.scope - libcontainer container 5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110. Oct 31 00:44:42.799799 containerd[1476]: time="2025-10-31T00:44:42.799700002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-k86m7,Uid:b4d94e92-3c89-4ae2-96b2-8f348d872af0,Namespace:calico-system,Attempt:1,} returns sandbox id \"869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161\"" Oct 31 00:44:42.814275 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 00:44:42.841483 containerd[1476]: time="2025-10-31T00:44:42.841292785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5976b79f87-gzf29,Uid:24efb227-abbe-46de-b752-2903fb4a14c0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110\"" Oct 31 00:44:42.984720 containerd[1476]: time="2025-10-31T00:44:42.984650021Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:42.985834 containerd[1476]: time="2025-10-31T00:44:42.985793988Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:44:42.985941 containerd[1476]: time="2025-10-31T00:44:42.985879108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:44:42.986138 kubelet[2507]: E1031 00:44:42.986087 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:44:42.986138 kubelet[2507]: E1031 00:44:42.986137 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:44:42.986446 kubelet[2507]: E1031 00:44:42.986377 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5976b79f87-6bc78_calico-apiserver(059647c6-592e-403f-9d8e-2ac4b74608a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:42.986510 kubelet[2507]: E1031 00:44:42.986469 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" podUID="059647c6-592e-403f-9d8e-2ac4b74608a6" Oct 31 00:44:42.986727 containerd[1476]: time="2025-10-31T00:44:42.986652770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:44:43.336261 containerd[1476]: time="2025-10-31T00:44:43.336175543Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:43.337477 containerd[1476]: time="2025-10-31T00:44:43.337414178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:44:43.337662 containerd[1476]: time="2025-10-31T00:44:43.337502102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 00:44:43.337773 kubelet[2507]: E1031 00:44:43.337706 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:44:43.337842 kubelet[2507]: E1031 00:44:43.337777 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:44:43.338060 kubelet[2507]: E1031 00:44:43.338023 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-k86m7_calico-system(b4d94e92-3c89-4ae2-96b2-8f348d872af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:43.338137 kubelet[2507]: E1031 00:44:43.338075 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k86m7" podUID="b4d94e92-3c89-4ae2-96b2-8f348d872af0" Oct 31 00:44:43.338253 containerd[1476]: time="2025-10-31T00:44:43.338193039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:44:43.371326 kubelet[2507]: E1031 00:44:43.371044 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:43.371326 kubelet[2507]: E1031 00:44:43.371311 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:43.373624 kubelet[2507]: E1031 00:44:43.372877 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k86m7" podUID="b4d94e92-3c89-4ae2-96b2-8f348d872af0" Oct 31 00:44:43.373624 kubelet[2507]: E1031 00:44:43.373058 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" podUID="249ea698-09d4-4be0-8fe6-e2048ed71a8b" Oct 31 00:44:43.384102 kubelet[2507]: E1031 00:44:43.383595 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" podUID="059647c6-592e-403f-9d8e-2ac4b74608a6" Oct 31 00:44:43.416265 systemd-networkd[1409]: cali95f14768d34: Gained IPv6LL Oct 31 00:44:43.676197 kubelet[2507]: I1031 00:44:43.676017 2507 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 00:44:43.676642 kubelet[2507]: 
E1031 00:44:43.676544 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:43.702299 containerd[1476]: time="2025-10-31T00:44:43.702245645Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:43.705858 containerd[1476]: time="2025-10-31T00:44:43.705376292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:44:43.705858 containerd[1476]: time="2025-10-31T00:44:43.705565727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:44:43.705993 kubelet[2507]: E1031 00:44:43.705870 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:44:43.705993 kubelet[2507]: E1031 00:44:43.705961 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:44:43.706079 kubelet[2507]: E1031 00:44:43.706043 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-5976b79f87-gzf29_calico-apiserver(24efb227-abbe-46de-b752-2903fb4a14c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:43.706120 kubelet[2507]: E1031 00:44:43.706084 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" podUID="24efb227-abbe-46de-b752-2903fb4a14c0" Oct 31 00:44:43.876747 systemd[1]: run-containerd-runc-k8s.io-02ea2b01bb0ce73d3d1a856e4a4447de0a92c1b4feff9bd6f5529c561e50aca3-runc.f5f0Ph.mount: Deactivated successfully. 
Oct 31 00:44:44.120237 systemd-networkd[1409]: cali61b1c1efdf7: Gained IPv6LL Oct 31 00:44:44.120626 systemd-networkd[1409]: cali58618479eae: Gained IPv6LL Oct 31 00:44:44.184219 systemd-networkd[1409]: cali62c0971c7e8: Gained IPv6LL Oct 31 00:44:44.373263 kubelet[2507]: E1031 00:44:44.373117 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:44:44.374130 kubelet[2507]: E1031 00:44:44.373407 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k86m7" podUID="b4d94e92-3c89-4ae2-96b2-8f348d872af0" Oct 31 00:44:44.374130 kubelet[2507]: E1031 00:44:44.373462 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" podUID="24efb227-abbe-46de-b752-2903fb4a14c0" Oct 31 00:44:44.374285 kubelet[2507]: E1031 00:44:44.373489 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" podUID="059647c6-592e-403f-9d8e-2ac4b74608a6" Oct 31 00:44:46.185246 systemd[1]: Started sshd@9-10.0.0.107:22-10.0.0.1:42352.service - OpenSSH per-connection server daemon (10.0.0.1:42352). Oct 31 00:44:46.230628 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 42352 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:44:46.232836 sshd[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:44:46.238302 systemd-logind[1450]: New session 10 of user core. Oct 31 00:44:46.246132 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 00:44:46.496694 sshd[5080]: pam_unix(sshd:session): session closed for user core Oct 31 00:44:46.501756 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Oct 31 00:44:46.502381 systemd[1]: sshd@9-10.0.0.107:22-10.0.0.1:42352.service: Deactivated successfully. Oct 31 00:44:46.505252 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 00:44:46.506479 systemd-logind[1450]: Removed session 10. 
Oct 31 00:44:50.138735 containerd[1476]: time="2025-10-31T00:44:50.138293254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:44:50.469058 containerd[1476]: time="2025-10-31T00:44:50.468845683Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:50.470187 containerd[1476]: time="2025-10-31T00:44:50.470137227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:44:50.470259 containerd[1476]: time="2025-10-31T00:44:50.470184105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:44:50.470433 kubelet[2507]: E1031 00:44:50.470380 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:44:50.470881 kubelet[2507]: E1031 00:44:50.470436 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:44:50.470881 kubelet[2507]: E1031 00:44:50.470530 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-bcfbfb9d5-d68xk_calico-system(905b0eae-aa04-4c59-afcc-92bdce8d8829): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:50.471424 containerd[1476]: time="2025-10-31T00:44:50.471382053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:44:50.817742 containerd[1476]: time="2025-10-31T00:44:50.817704324Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:50.926019 containerd[1476]: time="2025-10-31T00:44:50.925895698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:44:50.926019 containerd[1476]: time="2025-10-31T00:44:50.925963956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:44:50.926504 kubelet[2507]: E1031 00:44:50.926302 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:44:50.926504 kubelet[2507]: E1031 00:44:50.926372 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:44:50.926504 kubelet[2507]: E1031 00:44:50.926470 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-bcfbfb9d5-d68xk_calico-system(905b0eae-aa04-4c59-afcc-92bdce8d8829): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:50.926670 kubelet[2507]: E1031 00:44:50.926520 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bcfbfb9d5-d68xk" podUID="905b0eae-aa04-4c59-afcc-92bdce8d8829" Oct 31 00:44:51.137639 containerd[1476]: time="2025-10-31T00:44:51.137498696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:44:51.511257 systemd[1]: Started sshd@10-10.0.0.107:22-10.0.0.1:49772.service - OpenSSH per-connection server daemon (10.0.0.1:49772). 
Oct 31 00:44:51.549989 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 49772 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:44:51.551822 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:44:51.555662 systemd-logind[1450]: New session 11 of user core. Oct 31 00:44:51.562071 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 00:44:51.647831 containerd[1476]: time="2025-10-31T00:44:51.647781966Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:51.649008 containerd[1476]: time="2025-10-31T00:44:51.648966258Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:44:51.649098 containerd[1476]: time="2025-10-31T00:44:51.649009149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 00:44:51.649246 kubelet[2507]: E1031 00:44:51.649190 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:44:51.649834 kubelet[2507]: E1031 00:44:51.649247 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:44:51.649834 kubelet[2507]: E1031 00:44:51.649341 2507 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rgpjr_calico-system(51ae5eae-434b-4353-bdcc-818b667dd4ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:51.650832 containerd[1476]: time="2025-10-31T00:44:51.650633387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:44:51.681239 sshd[5105]: pam_unix(sshd:session): session closed for user core Oct 31 00:44:51.693039 systemd[1]: sshd@10-10.0.0.107:22-10.0.0.1:49772.service: Deactivated successfully. Oct 31 00:44:51.695012 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 00:44:51.697725 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Oct 31 00:44:51.704229 systemd[1]: Started sshd@11-10.0.0.107:22-10.0.0.1:49778.service - OpenSSH per-connection server daemon (10.0.0.1:49778). Oct 31 00:44:51.705125 systemd-logind[1450]: Removed session 11. Oct 31 00:44:51.739029 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 49778 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:44:51.740649 sshd[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:44:51.744756 systemd-logind[1450]: New session 12 of user core. Oct 31 00:44:51.751044 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 00:44:51.892066 sshd[5122]: pam_unix(sshd:session): session closed for user core Oct 31 00:44:51.902693 systemd[1]: sshd@11-10.0.0.107:22-10.0.0.1:49778.service: Deactivated successfully. Oct 31 00:44:51.905263 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 00:44:51.908043 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. 
Oct 31 00:44:51.921470 systemd[1]: Started sshd@12-10.0.0.107:22-10.0.0.1:49786.service - OpenSSH per-connection server daemon (10.0.0.1:49786). Oct 31 00:44:51.926376 systemd-logind[1450]: Removed session 12. Oct 31 00:44:51.964089 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 49786 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:44:51.966049 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:44:51.970567 systemd-logind[1450]: New session 13 of user core. Oct 31 00:44:51.982059 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 00:44:51.997179 containerd[1476]: time="2025-10-31T00:44:51.997125070Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:51.998192 containerd[1476]: time="2025-10-31T00:44:51.998159662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:44:51.998280 containerd[1476]: time="2025-10-31T00:44:51.998200338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 00:44:51.998450 kubelet[2507]: E1031 00:44:51.998398 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:44:51.998512 kubelet[2507]: E1031 00:44:51.998453 2507 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:44:51.998584 kubelet[2507]: E1031 00:44:51.998556 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rgpjr_calico-system(51ae5eae-434b-4353-bdcc-818b667dd4ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:51.998672 kubelet[2507]: E1031 00:44:51.998614 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:44:52.093665 sshd[5135]: pam_unix(sshd:session): session closed for user core Oct 31 00:44:52.097708 systemd[1]: sshd@12-10.0.0.107:22-10.0.0.1:49786.service: Deactivated successfully. 
Oct 31 00:44:52.100068 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 00:44:52.100733 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Oct 31 00:44:52.101556 systemd-logind[1450]: Removed session 13. Oct 31 00:44:54.121318 containerd[1476]: time="2025-10-31T00:44:54.120868794Z" level=info msg="StopPodSandbox for \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\"" Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.162 [WARNING][5160] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--chb2v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c270719-cb33-4792-9c98-48c89084c3a9", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0", Pod:"coredns-66bc5c9577-chb2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbae70475d0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.163 [INFO][5160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.163 [INFO][5160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" iface="eth0" netns="" Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.163 [INFO][5160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.163 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.199 [INFO][5171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" HandleID="k8s-pod-network.b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.199 [INFO][5171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.199 [INFO][5171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.206 [WARNING][5171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" HandleID="k8s-pod-network.b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.207 [INFO][5171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" HandleID="k8s-pod-network.b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.209 [INFO][5171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:54.215417 containerd[1476]: 2025-10-31 00:44:54.212 [INFO][5160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:54.216098 containerd[1476]: time="2025-10-31T00:44:54.215469800Z" level=info msg="TearDown network for sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\" successfully" Oct 31 00:44:54.216098 containerd[1476]: time="2025-10-31T00:44:54.215500808Z" level=info msg="StopPodSandbox for \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\" returns successfully" Oct 31 00:44:54.216296 containerd[1476]: time="2025-10-31T00:44:54.216243852Z" level=info msg="RemovePodSandbox for \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\"" Oct 31 00:44:54.219551 containerd[1476]: time="2025-10-31T00:44:54.219505572Z" level=info msg="Forcibly stopping sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\"" Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.258 [WARNING][5189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--chb2v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c270719-cb33-4792-9c98-48c89084c3a9", ResourceVersion:"1124", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba3ed54539e7034749eb57f3bb7e6a7ef8c57bf48c82b8980828c47a027420e0", Pod:"coredns-66bc5c9577-chb2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbae70475d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.258 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.258 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" iface="eth0" netns="" Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.258 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.258 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.281 [INFO][5197] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" HandleID="k8s-pod-network.b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.281 [INFO][5197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.282 [INFO][5197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.289 [WARNING][5197] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" HandleID="k8s-pod-network.b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.289 [INFO][5197] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" HandleID="k8s-pod-network.b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Workload="localhost-k8s-coredns--66bc5c9577--chb2v-eth0" Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.291 [INFO][5197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:54.296791 containerd[1476]: 2025-10-31 00:44:54.294 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d" Oct 31 00:44:54.297369 containerd[1476]: time="2025-10-31T00:44:54.296863796Z" level=info msg="TearDown network for sandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\" successfully" Oct 31 00:44:54.334021 containerd[1476]: time="2025-10-31T00:44:54.333948426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:44:54.334146 containerd[1476]: time="2025-10-31T00:44:54.334055888Z" level=info msg="RemovePodSandbox \"b42214565194694928564f7e0d6d836e99645d33dab0ec272df1050a389f6e3d\" returns successfully" Oct 31 00:44:54.334786 containerd[1476]: time="2025-10-31T00:44:54.334737767Z" level=info msg="StopPodSandbox for \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\"" Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.373 [WARNING][5215] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rgpjr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51ae5eae-434b-4353-bdcc-818b667dd4ed", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd", Pod:"csi-node-driver-rgpjr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib1ae8323217", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.373 [INFO][5215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.373 [INFO][5215] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" iface="eth0" netns="" Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.373 [INFO][5215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.373 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.402 [INFO][5224] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" HandleID="k8s-pod-network.c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.403 [INFO][5224] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.403 [INFO][5224] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.410 [WARNING][5224] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" HandleID="k8s-pod-network.c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.410 [INFO][5224] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" HandleID="k8s-pod-network.c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.412 [INFO][5224] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:54.418532 containerd[1476]: 2025-10-31 00:44:54.415 [INFO][5215] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:54.418532 containerd[1476]: time="2025-10-31T00:44:54.418469751Z" level=info msg="TearDown network for sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\" successfully" Oct 31 00:44:54.418532 containerd[1476]: time="2025-10-31T00:44:54.418504646Z" level=info msg="StopPodSandbox for \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\" returns successfully" Oct 31 00:44:54.419845 containerd[1476]: time="2025-10-31T00:44:54.419808132Z" level=info msg="RemovePodSandbox for \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\"" Oct 31 00:44:54.419894 containerd[1476]: time="2025-10-31T00:44:54.419850080Z" level=info msg="Forcibly stopping sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\"" Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.456 [WARNING][5241] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rgpjr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51ae5eae-434b-4353-bdcc-818b667dd4ed", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc5a7e0b72108298278926dbc540e7cf97272882f984efff8f8958c4906fd0fd", Pod:"csi-node-driver-rgpjr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib1ae8323217", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.456 [INFO][5241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.456 [INFO][5241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" iface="eth0" netns="" Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.456 [INFO][5241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.456 [INFO][5241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.478 [INFO][5250] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" HandleID="k8s-pod-network.c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.478 [INFO][5250] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.478 [INFO][5250] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.484 [WARNING][5250] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" HandleID="k8s-pod-network.c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.484 [INFO][5250] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" HandleID="k8s-pod-network.c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Workload="localhost-k8s-csi--node--driver--rgpjr-eth0" Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.486 [INFO][5250] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:54.492162 containerd[1476]: 2025-10-31 00:44:54.489 [INFO][5241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7" Oct 31 00:44:54.495633 containerd[1476]: time="2025-10-31T00:44:54.495599966Z" level=info msg="TearDown network for sandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\" successfully" Oct 31 00:44:54.511162 containerd[1476]: time="2025-10-31T00:44:54.511112552Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:44:54.511263 containerd[1476]: time="2025-10-31T00:44:54.511175159Z" level=info msg="RemovePodSandbox \"c10fba672b715b260f1e363f01bf3c150c6df26fa553e8281b5675181c592ce7\" returns successfully" Oct 31 00:44:54.511772 containerd[1476]: time="2025-10-31T00:44:54.511739218Z" level=info msg="StopPodSandbox for \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\"" Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.548 [WARNING][5268] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0", GenerateName:"calico-apiserver-5976b79f87-", Namespace:"calico-apiserver", SelfLink:"", UID:"059647c6-592e-403f-9d8e-2ac4b74608a6", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5976b79f87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d", Pod:"calico-apiserver-5976b79f87-6bc78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62c0971c7e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.548 [INFO][5268] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.548 [INFO][5268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" iface="eth0" netns="" Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.548 [INFO][5268] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.548 [INFO][5268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.572 [INFO][5276] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" HandleID="k8s-pod-network.0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.572 [INFO][5276] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.572 [INFO][5276] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.578 [WARNING][5276] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" HandleID="k8s-pod-network.0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.578 [INFO][5276] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" HandleID="k8s-pod-network.0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.579 [INFO][5276] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:54.585706 containerd[1476]: 2025-10-31 00:44:54.582 [INFO][5268] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:54.586299 containerd[1476]: time="2025-10-31T00:44:54.585770158Z" level=info msg="TearDown network for sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\" successfully" Oct 31 00:44:54.586299 containerd[1476]: time="2025-10-31T00:44:54.585797950Z" level=info msg="StopPodSandbox for \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\" returns successfully" Oct 31 00:44:54.586483 containerd[1476]: time="2025-10-31T00:44:54.586450205Z" level=info msg="RemovePodSandbox for \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\"" Oct 31 00:44:54.586525 containerd[1476]: time="2025-10-31T00:44:54.586490340Z" level=info msg="Forcibly stopping sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\"" Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.623 [WARNING][5294] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0", GenerateName:"calico-apiserver-5976b79f87-", Namespace:"calico-apiserver", SelfLink:"", UID:"059647c6-592e-403f-9d8e-2ac4b74608a6", ResourceVersion:"1214", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5976b79f87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11acc1246b05f9bb8de655aafcace4a2b8166280980bcd027b79829f1c9c7e2d", Pod:"calico-apiserver-5976b79f87-6bc78", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali62c0971c7e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.623 [INFO][5294] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.623 [INFO][5294] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" iface="eth0" netns="" Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.623 [INFO][5294] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.623 [INFO][5294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.648 [INFO][5303] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" HandleID="k8s-pod-network.0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.648 [INFO][5303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.648 [INFO][5303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.654 [WARNING][5303] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" HandleID="k8s-pod-network.0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.654 [INFO][5303] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" HandleID="k8s-pod-network.0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Workload="localhost-k8s-calico--apiserver--5976b79f87--6bc78-eth0" Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.656 [INFO][5303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:54.661912 containerd[1476]: 2025-10-31 00:44:54.659 [INFO][5294] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0" Oct 31 00:44:54.662533 containerd[1476]: time="2025-10-31T00:44:54.661969548Z" level=info msg="TearDown network for sandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\" successfully" Oct 31 00:44:54.666336 containerd[1476]: time="2025-10-31T00:44:54.666271730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:44:54.666420 containerd[1476]: time="2025-10-31T00:44:54.666341441Z" level=info msg="RemovePodSandbox \"0d65a860a911a4296ca88de670436bdf1fde83ea9c503fb7d12a7e65a37a83e0\" returns successfully" Oct 31 00:44:54.667042 containerd[1476]: time="2025-10-31T00:44:54.667004165Z" level=info msg="StopPodSandbox for \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\"" Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.707 [WARNING][5321] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--k86m7-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b4d94e92-3c89-4ae2-96b2-8f348d872af0", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161", Pod:"goldmane-7c778bb748-k86m7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61b1c1efdf7", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.708 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.708 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" iface="eth0" netns="" Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.708 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.708 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.733 [INFO][5330] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" HandleID="k8s-pod-network.5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.733 [INFO][5330] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.733 [INFO][5330] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.740 [WARNING][5330] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" HandleID="k8s-pod-network.5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.741 [INFO][5330] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" HandleID="k8s-pod-network.5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.742 [INFO][5330] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:54.749689 containerd[1476]: 2025-10-31 00:44:54.746 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:54.749689 containerd[1476]: time="2025-10-31T00:44:54.749646012Z" level=info msg="TearDown network for sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\" successfully" Oct 31 00:44:54.749689 containerd[1476]: time="2025-10-31T00:44:54.749683121Z" level=info msg="StopPodSandbox for \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\" returns successfully" Oct 31 00:44:54.750371 containerd[1476]: time="2025-10-31T00:44:54.750325677Z" level=info msg="RemovePodSandbox for \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\"" Oct 31 00:44:54.750371 containerd[1476]: time="2025-10-31T00:44:54.750367927Z" level=info msg="Forcibly stopping sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\"" Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.787 [WARNING][5348] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--k86m7-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b4d94e92-3c89-4ae2-96b2-8f348d872af0", ResourceVersion:"1204", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"869385f13035f20ce3b447bd6e63eac0eda41079f76d4981dcb5036a12b0e161", Pod:"goldmane-7c778bb748-k86m7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61b1c1efdf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.788 [INFO][5348] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.788 [INFO][5348] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" iface="eth0" netns="" Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.788 [INFO][5348] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.788 [INFO][5348] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.809 [INFO][5356] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" HandleID="k8s-pod-network.5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.809 [INFO][5356] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.809 [INFO][5356] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.817 [WARNING][5356] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" HandleID="k8s-pod-network.5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.817 [INFO][5356] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" HandleID="k8s-pod-network.5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Workload="localhost-k8s-goldmane--7c778bb748--k86m7-eth0" Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.818 [INFO][5356] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:54.824182 containerd[1476]: 2025-10-31 00:44:54.821 [INFO][5348] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18" Oct 31 00:44:54.824743 containerd[1476]: time="2025-10-31T00:44:54.824251871Z" level=info msg="TearDown network for sandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\" successfully" Oct 31 00:44:54.864859 containerd[1476]: time="2025-10-31T00:44:54.864768541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:44:54.864859 containerd[1476]: time="2025-10-31T00:44:54.864873528Z" level=info msg="RemovePodSandbox \"5cb111f1b40da3db487bb3bf7ed44d4969dd59b109890c252434860df4593c18\" returns successfully" Oct 31 00:44:54.865591 containerd[1476]: time="2025-10-31T00:44:54.865556460Z" level=info msg="StopPodSandbox for \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\"" Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.901 [WARNING][5374] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" WorkloadEndpoint="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.901 [INFO][5374] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.901 [INFO][5374] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" iface="eth0" netns="" Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.901 [INFO][5374] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.901 [INFO][5374] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.924 [INFO][5383] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" HandleID="k8s-pod-network.830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Workload="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.924 [INFO][5383] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.924 [INFO][5383] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.930 [WARNING][5383] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" HandleID="k8s-pod-network.830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Workload="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.930 [INFO][5383] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" HandleID="k8s-pod-network.830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Workload="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.932 [INFO][5383] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:54.938277 containerd[1476]: 2025-10-31 00:44:54.935 [INFO][5374] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:54.938675 containerd[1476]: time="2025-10-31T00:44:54.938320383Z" level=info msg="TearDown network for sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\" successfully" Oct 31 00:44:54.938675 containerd[1476]: time="2025-10-31T00:44:54.938355269Z" level=info msg="StopPodSandbox for \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\" returns successfully" Oct 31 00:44:54.939032 containerd[1476]: time="2025-10-31T00:44:54.938992064Z" level=info msg="RemovePodSandbox for \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\"" Oct 31 00:44:54.939032 containerd[1476]: time="2025-10-31T00:44:54.939042388Z" level=info msg="Forcibly stopping sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\"" Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:54.978 [WARNING][5401] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" WorkloadEndpoint="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:54.978 [INFO][5401] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:54.978 [INFO][5401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" iface="eth0" netns="" Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:54.978 [INFO][5401] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:54.978 [INFO][5401] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:55.001 [INFO][5409] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" HandleID="k8s-pod-network.830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Workload="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:55.001 [INFO][5409] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:55.001 [INFO][5409] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:55.007 [WARNING][5409] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" HandleID="k8s-pod-network.830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Workload="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:55.007 [INFO][5409] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" HandleID="k8s-pod-network.830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Workload="localhost-k8s-whisker--75d855d4bf--vqwbm-eth0" Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:55.009 [INFO][5409] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:55.015698 containerd[1476]: 2025-10-31 00:44:55.011 [INFO][5401] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573" Oct 31 00:44:55.015698 containerd[1476]: time="2025-10-31T00:44:55.014634628Z" level=info msg="TearDown network for sandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\" successfully" Oct 31 00:44:55.058574 containerd[1476]: time="2025-10-31T00:44:55.058504066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:44:55.058787 containerd[1476]: time="2025-10-31T00:44:55.058587783Z" level=info msg="RemovePodSandbox \"830d75a2aabae8cbeada6d25cbb9941aa2d820be7bb37edc4fde441de6826573\" returns successfully" Oct 31 00:44:55.059317 containerd[1476]: time="2025-10-31T00:44:55.059265425Z" level=info msg="StopPodSandbox for \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\"" Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.094 [WARNING][5426] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0", GenerateName:"calico-apiserver-5976b79f87-", Namespace:"calico-apiserver", SelfLink:"", UID:"24efb227-abbe-46de-b752-2903fb4a14c0", ResourceVersion:"1208", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5976b79f87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110", Pod:"calico-apiserver-5976b79f87-gzf29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58618479eae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.094 [INFO][5426] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.094 [INFO][5426] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" iface="eth0" netns="" Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.094 [INFO][5426] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.094 [INFO][5426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.117 [INFO][5434] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" HandleID="k8s-pod-network.293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.117 [INFO][5434] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.117 [INFO][5434] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.123 [WARNING][5434] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" HandleID="k8s-pod-network.293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.123 [INFO][5434] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" HandleID="k8s-pod-network.293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.125 [INFO][5434] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:55.131052 containerd[1476]: 2025-10-31 00:44:55.128 [INFO][5426] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:55.131814 containerd[1476]: time="2025-10-31T00:44:55.131101403Z" level=info msg="TearDown network for sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\" successfully" Oct 31 00:44:55.131814 containerd[1476]: time="2025-10-31T00:44:55.131130127Z" level=info msg="StopPodSandbox for \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\" returns successfully" Oct 31 00:44:55.131814 containerd[1476]: time="2025-10-31T00:44:55.131697903Z" level=info msg="RemovePodSandbox for \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\"" Oct 31 00:44:55.131814 containerd[1476]: time="2025-10-31T00:44:55.131735463Z" level=info msg="Forcibly stopping sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\"" Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.168 [WARNING][5452] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0", GenerateName:"calico-apiserver-5976b79f87-", Namespace:"calico-apiserver", SelfLink:"", UID:"24efb227-abbe-46de-b752-2903fb4a14c0", ResourceVersion:"1208", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5976b79f87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c5abfc7381fd1bb8a1a5c6bd71e884a6d92f06ca112e5fed46adb7c07672110", Pod:"calico-apiserver-5976b79f87-gzf29", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58618479eae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.168 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.168 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" iface="eth0" netns="" Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.168 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.168 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.191 [INFO][5461] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" HandleID="k8s-pod-network.293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.192 [INFO][5461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.192 [INFO][5461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.199 [WARNING][5461] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" HandleID="k8s-pod-network.293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.199 [INFO][5461] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" HandleID="k8s-pod-network.293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Workload="localhost-k8s-calico--apiserver--5976b79f87--gzf29-eth0" Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.200 [INFO][5461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:55.206851 containerd[1476]: 2025-10-31 00:44:55.203 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c" Oct 31 00:44:55.207399 containerd[1476]: time="2025-10-31T00:44:55.206903202Z" level=info msg="TearDown network for sandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\" successfully" Oct 31 00:44:55.211078 containerd[1476]: time="2025-10-31T00:44:55.211042730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:44:55.211130 containerd[1476]: time="2025-10-31T00:44:55.211093765Z" level=info msg="RemovePodSandbox \"293d9f6eef2b32ffa26b7dab428218e40e10229744197f58e1a6dfd46afff78c\" returns successfully" Oct 31 00:44:55.211595 containerd[1476]: time="2025-10-31T00:44:55.211558607Z" level=info msg="StopPodSandbox for \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\"" Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.248 [WARNING][5479] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0", GenerateName:"calico-kube-controllers-78c45f6ffd-", Namespace:"calico-system", SelfLink:"", UID:"249ea698-09d4-4be0-8fe6-e2048ed71a8b", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c45f6ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1", Pod:"calico-kube-controllers-78c45f6ffd-bcknz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95f14768d34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.248 [INFO][5479] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.248 [INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" iface="eth0" netns="" Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.248 [INFO][5479] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.248 [INFO][5479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.269 [INFO][5488] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" HandleID="k8s-pod-network.6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.269 [INFO][5488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.269 [INFO][5488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.274 [WARNING][5488] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" HandleID="k8s-pod-network.6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.274 [INFO][5488] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" HandleID="k8s-pod-network.6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.276 [INFO][5488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:55.281613 containerd[1476]: 2025-10-31 00:44:55.278 [INFO][5479] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:55.281613 containerd[1476]: time="2025-10-31T00:44:55.281579210Z" level=info msg="TearDown network for sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\" successfully" Oct 31 00:44:55.281613 containerd[1476]: time="2025-10-31T00:44:55.281611260Z" level=info msg="StopPodSandbox for \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\" returns successfully" Oct 31 00:44:55.282325 containerd[1476]: time="2025-10-31T00:44:55.282291607Z" level=info msg="RemovePodSandbox for \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\"" Oct 31 00:44:55.282384 containerd[1476]: time="2025-10-31T00:44:55.282331992Z" level=info msg="Forcibly stopping sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\"" Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.317 [WARNING][5506] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0", GenerateName:"calico-kube-controllers-78c45f6ffd-", Namespace:"calico-system", SelfLink:"", UID:"249ea698-09d4-4be0-8fe6-e2048ed71a8b", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c45f6ffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"205fbb742048873b814000b90185bfa5e81469ec35d7bd2c3636c62963b153f1", Pod:"calico-kube-controllers-78c45f6ffd-bcknz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95f14768d34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.318 [INFO][5506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.318 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" iface="eth0" netns="" Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.318 [INFO][5506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.318 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.338 [INFO][5514] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" HandleID="k8s-pod-network.6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.338 [INFO][5514] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.338 [INFO][5514] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.345 [WARNING][5514] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" HandleID="k8s-pod-network.6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.345 [INFO][5514] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" HandleID="k8s-pod-network.6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Workload="localhost-k8s-calico--kube--controllers--78c45f6ffd--bcknz-eth0" Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.347 [INFO][5514] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:55.353141 containerd[1476]: 2025-10-31 00:44:55.350 [INFO][5506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa" Oct 31 00:44:55.353598 containerd[1476]: time="2025-10-31T00:44:55.353197190Z" level=info msg="TearDown network for sandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\" successfully" Oct 31 00:44:55.357489 containerd[1476]: time="2025-10-31T00:44:55.357459075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:44:55.357546 containerd[1476]: time="2025-10-31T00:44:55.357502908Z" level=info msg="RemovePodSandbox \"6ce870ee8e97f44dc45964e6f5decaa14e10f3c1e610da926961040e94cb3afa\" returns successfully" Oct 31 00:44:55.358069 containerd[1476]: time="2025-10-31T00:44:55.358033634Z" level=info msg="StopPodSandbox for \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\"" Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.398 [WARNING][5533] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--klqvh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1fbbd014-0e03-4481-92c6-93eea54eedf4", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2", Pod:"coredns-66bc5c9577-klqvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54582a20f38", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.398 [INFO][5533] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.398 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" iface="eth0" netns="" Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.398 [INFO][5533] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.398 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.423 [INFO][5542] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" HandleID="k8s-pod-network.7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.423 [INFO][5542] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.424 [INFO][5542] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.429 [WARNING][5542] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" HandleID="k8s-pod-network.7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.429 [INFO][5542] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" HandleID="k8s-pod-network.7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.431 [INFO][5542] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:55.436726 containerd[1476]: 2025-10-31 00:44:55.433 [INFO][5533] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:55.437207 containerd[1476]: time="2025-10-31T00:44:55.436770920Z" level=info msg="TearDown network for sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\" successfully" Oct 31 00:44:55.437207 containerd[1476]: time="2025-10-31T00:44:55.436799284Z" level=info msg="StopPodSandbox for \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\" returns successfully" Oct 31 00:44:55.437407 containerd[1476]: time="2025-10-31T00:44:55.437374012Z" level=info msg="RemovePodSandbox for \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\"" Oct 31 00:44:55.437443 containerd[1476]: time="2025-10-31T00:44:55.437405742Z" level=info msg="Forcibly stopping sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\"" Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.471 [WARNING][5562] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--klqvh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1fbbd014-0e03-4481-92c6-93eea54eedf4", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 0, 44, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"030854f8989d66d51ea7e53de9e592a05abdd22c9b469233a1522afd8134d8e2", Pod:"coredns-66bc5c9577-klqvh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali54582a20f38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.471 [INFO][5562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.471 [INFO][5562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" iface="eth0" netns="" Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.471 [INFO][5562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.471 [INFO][5562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.497 [INFO][5571] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" HandleID="k8s-pod-network.7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.497 [INFO][5571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.498 [INFO][5571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.505 [WARNING][5571] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" HandleID="k8s-pod-network.7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.505 [INFO][5571] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" HandleID="k8s-pod-network.7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Workload="localhost-k8s-coredns--66bc5c9577--klqvh-eth0" Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.508 [INFO][5571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 00:44:55.515372 containerd[1476]: 2025-10-31 00:44:55.512 [INFO][5562] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444" Oct 31 00:44:55.516064 containerd[1476]: time="2025-10-31T00:44:55.515430131Z" level=info msg="TearDown network for sandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\" successfully" Oct 31 00:44:55.523785 containerd[1476]: time="2025-10-31T00:44:55.523726767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 00:44:55.523872 containerd[1476]: time="2025-10-31T00:44:55.523805745Z" level=info msg="RemovePodSandbox \"7e94dd1c5ae223d5bb7f17c93b40613ae15628eeed4d0076a6672fac48dd0444\" returns successfully" Oct 31 00:44:56.138836 containerd[1476]: time="2025-10-31T00:44:56.138770345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:44:56.509837 containerd[1476]: time="2025-10-31T00:44:56.509664508Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:56.511267 containerd[1476]: time="2025-10-31T00:44:56.510877112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:44:56.511267 containerd[1476]: time="2025-10-31T00:44:56.510918259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:44:56.511405 kubelet[2507]: E1031 00:44:56.511147 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:44:56.511405 kubelet[2507]: E1031 00:44:56.511204 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:44:56.511405 kubelet[2507]: E1031 00:44:56.511293 2507 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5976b79f87-gzf29_calico-apiserver(24efb227-abbe-46de-b752-2903fb4a14c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:56.511973 kubelet[2507]: E1031 00:44:56.511338 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" podUID="24efb227-abbe-46de-b752-2903fb4a14c0" Oct 31 00:44:57.115104 systemd[1]: Started sshd@13-10.0.0.107:22-10.0.0.1:49800.service - OpenSSH per-connection server daemon (10.0.0.1:49800). Oct 31 00:44:57.137367 containerd[1476]: time="2025-10-31T00:44:57.137332955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 00:44:57.160148 sshd[5581]: Accepted publickey for core from 10.0.0.1 port 49800 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:44:57.163003 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:44:57.168515 systemd-logind[1450]: New session 14 of user core. Oct 31 00:44:57.177231 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 31 00:44:57.313735 sshd[5581]: pam_unix(sshd:session): session closed for user core Oct 31 00:44:57.317530 systemd[1]: sshd@13-10.0.0.107:22-10.0.0.1:49800.service: Deactivated successfully. 
Oct 31 00:44:57.319611 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 00:44:57.320231 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Oct 31 00:44:57.321102 systemd-logind[1450]: Removed session 14. Oct 31 00:44:57.472069 containerd[1476]: time="2025-10-31T00:44:57.471906695Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:57.473022 containerd[1476]: time="2025-10-31T00:44:57.472988562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 00:44:57.473093 containerd[1476]: time="2025-10-31T00:44:57.473011978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 00:44:57.473189 kubelet[2507]: E1031 00:44:57.473134 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:44:57.473189 kubelet[2507]: E1031 00:44:57.473174 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 00:44:57.473320 kubelet[2507]: E1031 00:44:57.473264 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-k86m7_calico-system(b4d94e92-3c89-4ae2-96b2-8f348d872af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:57.473320 kubelet[2507]: E1031 00:44:57.473295 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k86m7" podUID="b4d94e92-3c89-4ae2-96b2-8f348d872af0" Oct 31 00:44:58.140316 containerd[1476]: time="2025-10-31T00:44:58.140260209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 00:44:58.471868 containerd[1476]: time="2025-10-31T00:44:58.471727489Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:44:58.472856 containerd[1476]: time="2025-10-31T00:44:58.472821458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 00:44:58.473204 containerd[1476]: time="2025-10-31T00:44:58.472905921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 00:44:58.473258 kubelet[2507]: E1031 00:44:58.473098 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:44:58.473258 kubelet[2507]: E1031 00:44:58.473152 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 00:44:58.473514 kubelet[2507]: E1031 00:44:58.473342 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5976b79f87-6bc78_calico-apiserver(059647c6-592e-403f-9d8e-2ac4b74608a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:58.473514 kubelet[2507]: E1031 00:44:58.473389 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" podUID="059647c6-592e-403f-9d8e-2ac4b74608a6" Oct 31 00:44:58.473597 containerd[1476]: time="2025-10-31T00:44:58.473568584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 00:44:58.819617 containerd[1476]: time="2025-10-31T00:44:58.819570262Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Oct 31 00:44:58.820867 containerd[1476]: time="2025-10-31T00:44:58.820829630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 00:44:58.820968 containerd[1476]: time="2025-10-31T00:44:58.820908313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 00:44:58.821106 kubelet[2507]: E1031 00:44:58.821058 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:44:58.821175 kubelet[2507]: E1031 00:44:58.821112 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 00:44:58.821237 kubelet[2507]: E1031 00:44:58.821210 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-78c45f6ffd-bcknz_calico-system(249ea698-09d4-4be0-8fe6-e2048ed71a8b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 00:44:58.821442 kubelet[2507]: E1031 00:44:58.821267 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" podUID="249ea698-09d4-4be0-8fe6-e2048ed71a8b" Oct 31 00:45:02.326421 systemd[1]: Started sshd@14-10.0.0.107:22-10.0.0.1:50888.service - OpenSSH per-connection server daemon (10.0.0.1:50888). Oct 31 00:45:02.364345 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 50888 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:45:02.366048 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:45:02.370406 systemd-logind[1450]: New session 15 of user core. Oct 31 00:45:02.380095 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 31 00:45:02.495396 sshd[5603]: pam_unix(sshd:session): session closed for user core Oct 31 00:45:02.499985 systemd[1]: sshd@14-10.0.0.107:22-10.0.0.1:50888.service: Deactivated successfully. Oct 31 00:45:02.503087 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 00:45:02.503977 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Oct 31 00:45:02.504956 systemd-logind[1450]: Removed session 15. 
Oct 31 00:45:05.137979 kubelet[2507]: E1031 00:45:05.137634 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:45:05.137979 kubelet[2507]: E1031 00:45:05.137700 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bcfbfb9d5-d68xk" podUID="905b0eae-aa04-4c59-afcc-92bdce8d8829" Oct 31 00:45:07.507984 systemd[1]: Started sshd@15-10.0.0.107:22-10.0.0.1:50900.service - OpenSSH per-connection server daemon (10.0.0.1:50900). Oct 31 00:45:07.546867 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 50900 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:45:07.548585 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:45:07.554308 systemd-logind[1450]: New session 16 of user core. Oct 31 00:45:07.568231 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 00:45:07.691961 sshd[5619]: pam_unix(sshd:session): session closed for user core Oct 31 00:45:07.697189 systemd[1]: sshd@15-10.0.0.107:22-10.0.0.1:50900.service: Deactivated successfully. Oct 31 00:45:07.701051 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 00:45:07.701884 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Oct 31 00:45:07.702956 systemd-logind[1450]: Removed session 16. 
Oct 31 00:45:08.136966 kubelet[2507]: E1031 00:45:08.136886 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:10.137073 kubelet[2507]: E1031 00:45:10.136870 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k86m7" podUID="b4d94e92-3c89-4ae2-96b2-8f348d872af0" Oct 31 00:45:10.137073 kubelet[2507]: E1031 00:45:10.136875 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" podUID="059647c6-592e-403f-9d8e-2ac4b74608a6" Oct 31 00:45:10.141475 kubelet[2507]: E1031 00:45:10.137769 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" podUID="24efb227-abbe-46de-b752-2903fb4a14c0" Oct 31 00:45:11.136747 kubelet[2507]: E1031 00:45:11.136682 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" podUID="249ea698-09d4-4be0-8fe6-e2048ed71a8b" Oct 31 00:45:12.706319 systemd[1]: Started sshd@16-10.0.0.107:22-10.0.0.1:40248.service - OpenSSH per-connection server daemon (10.0.0.1:40248). Oct 31 00:45:12.748057 sshd[5633]: Accepted publickey for core from 10.0.0.1 port 40248 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:45:12.749894 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:45:12.755731 systemd-logind[1450]: New session 17 of user core. Oct 31 00:45:12.763132 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 00:45:12.880961 sshd[5633]: pam_unix(sshd:session): session closed for user core Oct 31 00:45:12.885426 systemd[1]: sshd@16-10.0.0.107:22-10.0.0.1:40248.service: Deactivated successfully. Oct 31 00:45:12.888776 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 00:45:12.889546 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Oct 31 00:45:12.890627 systemd-logind[1450]: Removed session 17. 
Oct 31 00:45:13.136102 kubelet[2507]: E1031 00:45:13.136060 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 00:45:13.878047 systemd[1]: run-containerd-runc-k8s.io-02ea2b01bb0ce73d3d1a856e4a4447de0a92c1b4feff9bd6f5529c561e50aca3-runc.605BKS.mount: Deactivated successfully. Oct 31 00:45:17.892180 systemd[1]: Started sshd@17-10.0.0.107:22-10.0.0.1:40250.service - OpenSSH per-connection server daemon (10.0.0.1:40250). Oct 31 00:45:17.943416 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 40250 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:45:17.945411 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:45:17.949662 systemd-logind[1450]: New session 18 of user core. Oct 31 00:45:17.955049 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 00:45:18.074461 sshd[5674]: pam_unix(sshd:session): session closed for user core Oct 31 00:45:18.084287 systemd[1]: sshd@17-10.0.0.107:22-10.0.0.1:40250.service: Deactivated successfully. Oct 31 00:45:18.086676 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 00:45:18.090132 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Oct 31 00:45:18.095201 systemd[1]: Started sshd@18-10.0.0.107:22-10.0.0.1:40260.service - OpenSSH per-connection server daemon (10.0.0.1:40260). Oct 31 00:45:18.096174 systemd-logind[1450]: Removed session 18. 
Oct 31 00:45:18.133130 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 40260 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:45:18.134914 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:45:18.139038 containerd[1476]: time="2025-10-31T00:45:18.139002553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 00:45:18.144195 systemd-logind[1450]: New session 19 of user core. Oct 31 00:45:18.153589 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 31 00:45:18.476265 containerd[1476]: time="2025-10-31T00:45:18.476111875Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:45:18.478214 containerd[1476]: time="2025-10-31T00:45:18.478140762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 00:45:18.478440 containerd[1476]: time="2025-10-31T00:45:18.478182071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 00:45:18.478476 kubelet[2507]: E1031 00:45:18.478426 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:45:18.478950 kubelet[2507]: E1031 00:45:18.478485 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 00:45:18.478950 kubelet[2507]: E1031 00:45:18.478689 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-bcfbfb9d5-d68xk_calico-system(905b0eae-aa04-4c59-afcc-92bdce8d8829): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 00:45:18.479009 containerd[1476]: time="2025-10-31T00:45:18.478902698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 00:45:18.555049 sshd[5689]: pam_unix(sshd:session): session closed for user core Oct 31 00:45:18.569623 systemd[1]: sshd@18-10.0.0.107:22-10.0.0.1:40260.service: Deactivated successfully. Oct 31 00:45:18.572341 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 00:45:18.574419 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Oct 31 00:45:18.586256 systemd[1]: Started sshd@19-10.0.0.107:22-10.0.0.1:40266.service - OpenSSH per-connection server daemon (10.0.0.1:40266). Oct 31 00:45:18.587423 systemd-logind[1450]: Removed session 19. Oct 31 00:45:18.627101 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 40266 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:45:18.629233 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:45:18.633929 systemd-logind[1450]: New session 20 of user core. Oct 31 00:45:18.644184 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 31 00:45:18.796003 containerd[1476]: time="2025-10-31T00:45:18.795723906Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:45:18.798219 containerd[1476]: time="2025-10-31T00:45:18.798141757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 00:45:18.798347 containerd[1476]: time="2025-10-31T00:45:18.798190420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 00:45:18.798535 kubelet[2507]: E1031 00:45:18.798487 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:45:18.798601 kubelet[2507]: E1031 00:45:18.798549 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 00:45:18.798814 kubelet[2507]: E1031 00:45:18.798790 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rgpjr_calico-system(51ae5eae-434b-4353-bdcc-818b667dd4ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 
00:45:18.799382 containerd[1476]: time="2025-10-31T00:45:18.799228894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 00:45:19.149720 containerd[1476]: time="2025-10-31T00:45:19.149557405Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:45:19.151899 containerd[1476]: time="2025-10-31T00:45:19.151835978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 00:45:19.152093 containerd[1476]: time="2025-10-31T00:45:19.151918976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 00:45:19.152187 kubelet[2507]: E1031 00:45:19.152141 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:45:19.152268 kubelet[2507]: E1031 00:45:19.152198 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 00:45:19.152568 kubelet[2507]: E1031 00:45:19.152433 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-bcfbfb9d5-d68xk_calico-system(905b0eae-aa04-4c59-afcc-92bdce8d8829): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 00:45:19.152568 kubelet[2507]: E1031 00:45:19.152504 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bcfbfb9d5-d68xk" podUID="905b0eae-aa04-4c59-afcc-92bdce8d8829" Oct 31 00:45:19.152712 containerd[1476]: time="2025-10-31T00:45:19.152547627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 00:45:19.337391 sshd[5702]: pam_unix(sshd:session): session closed for user core Oct 31 00:45:19.348367 systemd[1]: sshd@19-10.0.0.107:22-10.0.0.1:40266.service: Deactivated successfully. Oct 31 00:45:19.351998 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 00:45:19.354512 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Oct 31 00:45:19.364337 systemd[1]: Started sshd@20-10.0.0.107:22-10.0.0.1:40268.service - OpenSSH per-connection server daemon (10.0.0.1:40268). Oct 31 00:45:19.365997 systemd-logind[1450]: Removed session 20. 
Oct 31 00:45:19.407438 sshd[5726]: Accepted publickey for core from 10.0.0.1 port 40268 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:45:19.409483 sshd[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:45:19.414779 systemd-logind[1450]: New session 21 of user core. Oct 31 00:45:19.430245 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 31 00:45:19.515601 containerd[1476]: time="2025-10-31T00:45:19.515543574Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 00:45:19.516932 containerd[1476]: time="2025-10-31T00:45:19.516886570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 00:45:19.517078 containerd[1476]: time="2025-10-31T00:45:19.516968716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 00:45:19.517180 kubelet[2507]: E1031 00:45:19.517132 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:45:19.517583 kubelet[2507]: E1031 00:45:19.517190 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 00:45:19.517583 kubelet[2507]: E1031 00:45:19.517277 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rgpjr_calico-system(51ae5eae-434b-4353-bdcc-818b667dd4ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 00:45:19.517583 kubelet[2507]: E1031 00:45:19.517318 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed" Oct 31 00:45:19.659169 sshd[5726]: pam_unix(sshd:session): session closed for user core Oct 31 00:45:19.671212 systemd[1]: sshd@20-10.0.0.107:22-10.0.0.1:40268.service: Deactivated successfully. Oct 31 00:45:19.673464 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 00:45:19.675403 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. 
Oct 31 00:45:19.682624 systemd[1]: Started sshd@21-10.0.0.107:22-10.0.0.1:40274.service - OpenSSH per-connection server daemon (10.0.0.1:40274). Oct 31 00:45:19.683679 systemd-logind[1450]: Removed session 21. Oct 31 00:45:19.716619 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 40274 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc Oct 31 00:45:19.718846 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 00:45:19.724843 systemd-logind[1450]: New session 22 of user core. Oct 31 00:45:19.731222 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 31 00:45:19.860838 sshd[5739]: pam_unix(sshd:session): session closed for user core Oct 31 00:45:19.865998 systemd[1]: sshd@21-10.0.0.107:22-10.0.0.1:40274.service: Deactivated successfully. Oct 31 00:45:19.868647 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 00:45:19.869482 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. Oct 31 00:45:19.870598 systemd-logind[1450]: Removed session 22. 
Oct 31 00:45:22.138068 containerd[1476]: time="2025-10-31T00:45:22.138018790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 00:45:22.649805 containerd[1476]: time="2025-10-31T00:45:22.649755067Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:45:22.651274 containerd[1476]: time="2025-10-31T00:45:22.651236373Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 00:45:22.651394 containerd[1476]: time="2025-10-31T00:45:22.651317638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:45:22.651536 kubelet[2507]: E1031 00:45:22.651488 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:45:22.652021 kubelet[2507]: E1031 00:45:22.651550 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:45:22.652021 kubelet[2507]: E1031 00:45:22.651794 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5976b79f87-gzf29_calico-apiserver(24efb227-abbe-46de-b752-2903fb4a14c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:45:22.652021 kubelet[2507]: E1031 00:45:22.651851 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" podUID="24efb227-abbe-46de-b752-2903fb4a14c0"
Oct 31 00:45:22.652358 containerd[1476]: time="2025-10-31T00:45:22.652334397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 31 00:45:23.002997 containerd[1476]: time="2025-10-31T00:45:23.002780533Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:45:23.071683 containerd[1476]: time="2025-10-31T00:45:23.071572703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 31 00:45:23.071832 containerd[1476]: time="2025-10-31T00:45:23.071630662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:45:23.072020 kubelet[2507]: E1031 00:45:23.071955 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 00:45:23.072020 kubelet[2507]: E1031 00:45:23.072014 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 00:45:23.072150 kubelet[2507]: E1031 00:45:23.072103 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-k86m7_calico-system(b4d94e92-3c89-4ae2-96b2-8f348d872af0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:45:23.072150 kubelet[2507]: E1031 00:45:23.072135 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-k86m7" podUID="b4d94e92-3c89-4ae2-96b2-8f348d872af0"
Oct 31 00:45:24.137318 kubelet[2507]: E1031 00:45:24.137201 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:45:24.138050 containerd[1476]: time="2025-10-31T00:45:24.138007254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 00:45:24.475494 containerd[1476]: time="2025-10-31T00:45:24.475119120Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:45:24.480468 containerd[1476]: time="2025-10-31T00:45:24.480431476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 00:45:24.480542 containerd[1476]: time="2025-10-31T00:45:24.480466453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 00:45:24.480636 kubelet[2507]: E1031 00:45:24.480589 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:45:24.480708 kubelet[2507]: E1031 00:45:24.480637 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 00:45:24.480787 kubelet[2507]: E1031 00:45:24.480753 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5976b79f87-6bc78_calico-apiserver(059647c6-592e-403f-9d8e-2ac4b74608a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:45:24.480837 kubelet[2507]: E1031 00:45:24.480805 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" podUID="059647c6-592e-403f-9d8e-2ac4b74608a6"
Oct 31 00:45:24.881110 systemd[1]: Started sshd@22-10.0.0.107:22-10.0.0.1:59878.service - OpenSSH per-connection server daemon (10.0.0.1:59878).
Oct 31 00:45:24.919837 sshd[5760]: Accepted publickey for core from 10.0.0.1 port 59878 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:45:24.922050 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:45:24.926556 systemd-logind[1450]: New session 23 of user core.
Oct 31 00:45:24.934080 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 31 00:45:25.048888 sshd[5760]: pam_unix(sshd:session): session closed for user core
Oct 31 00:45:25.052916 systemd[1]: sshd@22-10.0.0.107:22-10.0.0.1:59878.service: Deactivated successfully.
Oct 31 00:45:25.055357 systemd[1]: session-23.scope: Deactivated successfully.
Oct 31 00:45:25.056218 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit.
Oct 31 00:45:25.057322 systemd-logind[1450]: Removed session 23.
Oct 31 00:45:25.137290 containerd[1476]: time="2025-10-31T00:45:25.137139137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 31 00:45:25.490471 containerd[1476]: time="2025-10-31T00:45:25.490318255Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 00:45:25.530751 containerd[1476]: time="2025-10-31T00:45:25.530676323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 31 00:45:25.530945 containerd[1476]: time="2025-10-31T00:45:25.530709015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 31 00:45:25.531091 kubelet[2507]: E1031 00:45:25.531026 2507 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 00:45:25.531522 kubelet[2507]: E1031 00:45:25.531091 2507 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 00:45:25.531522 kubelet[2507]: E1031 00:45:25.531205 2507 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-78c45f6ffd-bcknz_calico-system(249ea698-09d4-4be0-8fe6-e2048ed71a8b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 31 00:45:25.531522 kubelet[2507]: E1031 00:45:25.531252 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" podUID="249ea698-09d4-4be0-8fe6-e2048ed71a8b"
Oct 31 00:45:28.137002 kubelet[2507]: E1031 00:45:28.136902 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 00:45:30.063854 systemd[1]: Started sshd@23-10.0.0.107:22-10.0.0.1:42678.service - OpenSSH per-connection server daemon (10.0.0.1:42678).
Oct 31 00:45:30.110817 sshd[5775]: Accepted publickey for core from 10.0.0.1 port 42678 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:45:30.114451 sshd[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:45:30.119547 systemd-logind[1450]: New session 24 of user core.
Oct 31 00:45:30.126067 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 31 00:45:30.265236 sshd[5775]: pam_unix(sshd:session): session closed for user core
Oct 31 00:45:30.273217 systemd[1]: sshd@23-10.0.0.107:22-10.0.0.1:42678.service: Deactivated successfully.
Oct 31 00:45:30.278488 systemd[1]: session-24.scope: Deactivated successfully.
Oct 31 00:45:30.280719 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit.
Oct 31 00:45:30.282359 systemd-logind[1450]: Removed session 24.
Oct 31 00:45:35.138088 kubelet[2507]: E1031 00:45:35.138030 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-6bc78" podUID="059647c6-592e-403f-9d8e-2ac4b74608a6"
Oct 31 00:45:35.139294 kubelet[2507]: E1031 00:45:35.139206 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rgpjr" podUID="51ae5eae-434b-4353-bdcc-818b667dd4ed"
Oct 31 00:45:35.139294 kubelet[2507]: E1031 00:45:35.139240 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bcfbfb9d5-d68xk" podUID="905b0eae-aa04-4c59-afcc-92bdce8d8829"
Oct 31 00:45:35.282269 systemd[1]: Started sshd@24-10.0.0.107:22-10.0.0.1:42684.service - OpenSSH per-connection server daemon (10.0.0.1:42684).
Oct 31 00:45:35.325564 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 42684 ssh2: RSA SHA256:fxbg+cDPAGAOxNy6Apu5lF9WK7GP5km5dh02Op5u+wc
Oct 31 00:45:35.327703 sshd[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 00:45:35.332880 systemd-logind[1450]: New session 25 of user core.
Oct 31 00:45:35.342122 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 31 00:45:35.483422 sshd[5792]: pam_unix(sshd:session): session closed for user core
Oct 31 00:45:35.488591 systemd[1]: sshd@24-10.0.0.107:22-10.0.0.1:42684.service: Deactivated successfully.
Oct 31 00:45:35.491721 systemd[1]: session-25.scope: Deactivated successfully.
Oct 31 00:45:35.492813 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit.
Oct 31 00:45:35.494272 systemd-logind[1450]: Removed session 25.
Oct 31 00:45:36.138163 kubelet[2507]: E1031 00:45:36.138091 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c45f6ffd-bcknz" podUID="249ea698-09d4-4be0-8fe6-e2048ed71a8b"
Oct 31 00:45:36.138163 kubelet[2507]: E1031 00:45:36.138133 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5976b79f87-gzf29" podUID="24efb227-abbe-46de-b752-2903fb4a14c0"