Nov 8 00:16:03.977422 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:16:03.977449 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:16:03.977464 kernel: BIOS-provided physical RAM map: Nov 8 00:16:03.977473 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 8 00:16:03.977482 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 8 00:16:03.977490 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 8 00:16:03.977501 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Nov 8 00:16:03.977510 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Nov 8 00:16:03.977519 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 8 00:16:03.977530 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 8 00:16:03.977540 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 8 00:16:03.977548 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 8 00:16:03.977562 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 8 00:16:03.977571 kernel: NX (Execute Disable) protection: active Nov 8 00:16:03.977582 kernel: APIC: Static calls initialized Nov 8 00:16:03.977598 kernel: SMBIOS 2.8 present. 
Nov 8 00:16:03.977608 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Nov 8 00:16:03.977618 kernel: Hypervisor detected: KVM Nov 8 00:16:03.977627 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 8 00:16:03.977637 kernel: kvm-clock: using sched offset of 3058767310 cycles Nov 8 00:16:03.977647 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 8 00:16:03.977657 kernel: tsc: Detected 2794.748 MHz processor Nov 8 00:16:03.977667 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:16:03.977678 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:16:03.977688 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Nov 8 00:16:03.977701 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 8 00:16:03.977712 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:16:03.977722 kernel: Using GB pages for direct mapping Nov 8 00:16:03.977732 kernel: ACPI: Early table checksum verification disabled Nov 8 00:16:03.977742 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Nov 8 00:16:03.977752 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:16:03.977762 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:16:03.977772 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:16:03.977785 kernel: ACPI: FACS 0x000000009CFE0000 000040 Nov 8 00:16:03.977795 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:16:03.977805 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:16:03.977815 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:16:03.977825 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:16:03.977835 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Nov 8 00:16:03.977845 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Nov 8 00:16:03.977860 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Nov 8 00:16:03.977873 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Nov 8 00:16:03.977883 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Nov 8 00:16:03.977894 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Nov 8 00:16:03.977905 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Nov 8 00:16:03.977915 kernel: No NUMA configuration found Nov 8 00:16:03.977925 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Nov 8 00:16:03.977945 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Nov 8 00:16:03.977959 kernel: Zone ranges: Nov 8 00:16:03.977969 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:16:03.977980 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Nov 8 00:16:03.977990 kernel: Normal empty Nov 8 00:16:03.978000 kernel: Movable zone start for each node Nov 8 00:16:03.978011 kernel: Early memory node ranges Nov 8 00:16:03.978021 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 8 00:16:03.978031 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Nov 8 00:16:03.978042 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Nov 8 00:16:03.978056 kernel: On 
node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:16:03.978069 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 8 00:16:03.978079 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 8 00:16:03.978090 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 8 00:16:03.978100 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 8 00:16:03.978111 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:16:03.978121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 8 00:16:03.978132 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 8 00:16:03.978142 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:16:03.978156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 8 00:16:03.978166 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 8 00:16:03.978177 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:16:03.978187 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 8 00:16:03.978198 kernel: TSC deadline timer available Nov 8 00:16:03.978208 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Nov 8 00:16:03.978219 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 8 00:16:03.978229 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 8 00:16:03.978242 kernel: kvm-guest: setup PV sched yield Nov 8 00:16:03.978255 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 8 00:16:03.978266 kernel: Booting paravirtualized kernel on KVM Nov 8 00:16:03.978289 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:16:03.978301 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Nov 8 00:16:03.978311 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u524288 Nov 8 00:16:03.978322 kernel: pcpu-alloc: s196712 r8192 d32664 u524288 alloc=1*2097152 Nov 8 00:16:03.978332 kernel: pcpu-alloc: [0] 0 1 2 3 Nov 8 00:16:03.978342 kernel: kvm-guest: PV spinlocks enabled Nov 8 00:16:03.978353 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 8 00:16:03.978369 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:16:03.978380 kernel: random: crng init done Nov 8 00:16:03.978390 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:16:03.978401 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:16:03.978412 kernel: Fallback order for Node 0: 0 Nov 8 00:16:03.978422 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Nov 8 00:16:03.978433 kernel: Policy zone: DMA32 Nov 8 00:16:03.978443 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:16:03.978457 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 136900K reserved, 0K cma-reserved) Nov 8 00:16:03.978468 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 8 00:16:03.978478 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:16:03.978489 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:16:03.978499 kernel: Dynamic Preempt: voluntary Nov 8 00:16:03.978510 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:16:03.978521 kernel: rcu: RCU event tracing is enabled. Nov 8 00:16:03.978532 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 8 00:16:03.978542 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:16:03.978556 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:16:03.978567 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:16:03.978577 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:16:03.978588 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 8 00:16:03.978601 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Nov 8 00:16:03.978612 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:16:03.978622 kernel: Console: colour VGA+ 80x25 Nov 8 00:16:03.978633 kernel: printk: console [ttyS0] enabled Nov 8 00:16:03.978643 kernel: ACPI: Core revision 20230628 Nov 8 00:16:03.978654 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 8 00:16:03.978667 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:16:03.978678 kernel: x2apic enabled Nov 8 00:16:03.978688 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:16:03.978699 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 8 00:16:03.978710 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 8 00:16:03.978720 kernel: kvm-guest: setup PV IPIs Nov 8 00:16:03.978731 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 8 00:16:03.978754 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 8 00:16:03.978765 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Nov 8 00:16:03.978776 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 8 00:16:03.978787 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 8 00:16:03.978800 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 8 00:16:03.978811 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:16:03.978822 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:16:03.978833 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:16:03.978844 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 8 00:16:03.978858 kernel: active return thunk: retbleed_return_thunk Nov 8 00:16:03.978869 kernel: RETBleed: Mitigation: untrained return thunk Nov 8 00:16:03.978883 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:16:03.978894 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:16:03.978905 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 8 00:16:03.978916 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Nov 8 00:16:03.978937 kernel: active return thunk: srso_return_thunk Nov 8 00:16:03.978961 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 8 00:16:03.978993 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:16:03.979013 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:16:03.979034 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:16:03.979056 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:16:03.979078 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 8 00:16:03.979095 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:16:03.979104 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:16:03.979114 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:16:03.979123 kernel: landlock: Up and running. Nov 8 00:16:03.979136 kernel: SELinux: Initializing. Nov 8 00:16:03.979145 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:16:03.979155 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:16:03.979165 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 8 00:16:03.979175 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 8 00:16:03.979186 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 8 00:16:03.979196 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 8 00:16:03.979206 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 8 00:16:03.979221 kernel: ... version: 0 Nov 8 00:16:03.979235 kernel: ... bit width: 48 Nov 8 00:16:03.979245 kernel: ... generic registers: 6 Nov 8 00:16:03.979256 kernel: ... value mask: 0000ffffffffffff Nov 8 00:16:03.979266 kernel: ... max period: 00007fffffffffff Nov 8 00:16:03.979289 kernel: ... fixed-purpose events: 0 Nov 8 00:16:03.979300 kernel: ... 
event mask: 000000000000003f Nov 8 00:16:03.979310 kernel: signal: max sigframe size: 1776 Nov 8 00:16:03.979319 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:16:03.979330 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:16:03.979345 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:16:03.979355 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:16:03.979366 kernel: .... node #0, CPUs: #1 #2 #3 Nov 8 00:16:03.979376 kernel: smp: Brought up 1 node, 4 CPUs Nov 8 00:16:03.979386 kernel: smpboot: Max logical packages: 1 Nov 8 00:16:03.979397 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Nov 8 00:16:03.979407 kernel: devtmpfs: initialized Nov 8 00:16:03.979417 kernel: x86/mm: Memory block size: 128MB Nov 8 00:16:03.979428 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:16:03.979441 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 8 00:16:03.979452 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:16:03.979462 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:16:03.979472 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:16:03.979483 kernel: audit: type=2000 audit(1762560963.279:1): state=initialized audit_enabled=0 res=1 Nov 8 00:16:03.979502 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:16:03.979517 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:16:03.979536 kernel: cpuidle: using governor menu Nov 8 00:16:03.979548 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:16:03.979563 kernel: dca service started, version 1.12.1 Nov 8 00:16:03.979574 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 8 00:16:03.979585 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 8 00:16:03.979596 kernel: PCI: Using configuration type 1 for base access Nov 8 00:16:03.979607 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 8 00:16:03.979618 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:16:03.979629 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:16:03.979639 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:16:03.979649 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:16:03.979664 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:16:03.979674 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:16:03.979685 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:16:03.979695 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:16:03.979706 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:16:03.979717 kernel: ACPI: Interpreter enabled Nov 8 00:16:03.979727 kernel: ACPI: PM: (supports S0 S3 S5) Nov 8 00:16:03.979737 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:16:03.979748 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:16:03.979763 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:16:03.979775 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 8 00:16:03.979786 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:16:03.980079 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:16:03.980253 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 8 00:16:03.980434 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 8 00:16:03.980450 kernel: PCI host bridge to bus 0000:00 Nov 8 00:16:03.980616 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:16:03.980768 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:16:03.980912 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:16:03.981140 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Nov 8 00:16:03.981330 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:16:03.981486 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 8 00:16:03.981635 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:16:03.981837 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 8 00:16:03.982030 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 8 00:16:03.982195 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Nov 8 00:16:03.982466 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Nov 8 00:16:03.982634 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Nov 8 00:16:03.982794 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:16:03.982998 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Nov 8 00:16:03.983172 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Nov 8 00:16:03.983369 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Nov 8 00:16:03.983533 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Nov 8 00:16:03.983730 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Nov 8 00:16:03.983897 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Nov 8 00:16:03.984072 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Nov 8 00:16:03.984236 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Nov 8 
00:16:03.984441 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 8 00:16:03.984610 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Nov 8 00:16:03.984772 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Nov 8 00:16:03.984944 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Nov 8 00:16:03.985115 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Nov 8 00:16:03.985308 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 8 00:16:03.985484 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 8 00:16:03.985665 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 8 00:16:03.985829 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Nov 8 00:16:03.986047 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Nov 8 00:16:03.986214 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 8 00:16:03.986394 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 8 00:16:03.986410 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 8 00:16:03.986424 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 8 00:16:03.986432 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:16:03.986440 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 8 00:16:03.986448 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 8 00:16:03.986455 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 8 00:16:03.986464 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 8 00:16:03.986472 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 8 00:16:03.986480 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 8 00:16:03.986487 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 8 00:16:03.986498 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 8 00:16:03.986506 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 8 00:16:03.986514 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 8 00:16:03.986521 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 8 00:16:03.986529 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 8 00:16:03.986537 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 8 00:16:03.986545 kernel: iommu: Default domain type: Translated Nov 8 00:16:03.986553 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:16:03.986561 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:16:03.986571 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:16:03.986579 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 8 00:16:03.986587 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Nov 8 00:16:03.986719 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 8 00:16:03.986848 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 8 00:16:03.986989 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:16:03.987000 kernel: vgaarb: loaded Nov 8 00:16:03.987008 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 8 00:16:03.987020 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 8 00:16:03.987028 kernel: clocksource: Switched to clocksource kvm-clock Nov 8 00:16:03.987036 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:16:03.987044 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:16:03.987052 kernel: pnp: PnP ACPI init Nov 
8 00:16:03.987207 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 8 00:16:03.987219 kernel: pnp: PnP ACPI: found 6 devices Nov 8 00:16:03.987227 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:16:03.987235 kernel: NET: Registered PF_INET protocol family Nov 8 00:16:03.987246 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:16:03.987254 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 8 00:16:03.987262 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:16:03.987270 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:16:03.987356 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 8 00:16:03.987368 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 8 00:16:03.987379 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:16:03.987389 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:16:03.987405 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:16:03.987416 kernel: NET: Registered PF_XDP protocol family Nov 8 00:16:03.987572 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:16:03.987693 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:16:03.987809 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:16:03.987925 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Nov 8 00:16:03.988060 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 8 00:16:03.988175 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 8 00:16:03.988186 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:16:03.988200 kernel: Initialise system trusted keyrings Nov 8 00:16:03.988207 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 8 00:16:03.988215 kernel: Key type asymmetric registered Nov 8 00:16:03.988223 kernel: Asymmetric key parser 'x509' registered Nov 8 00:16:03.988231 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:16:03.988239 kernel: io scheduler mq-deadline registered Nov 8 00:16:03.988247 kernel: io scheduler kyber registered Nov 8 00:16:03.988254 kernel: io scheduler bfq registered Nov 8 00:16:03.988262 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:16:03.988288 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 8 00:16:03.988300 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 8 00:16:03.988310 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 8 00:16:03.988318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:16:03.988326 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:16:03.988335 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 8 00:16:03.988343 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:16:03.988350 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:16:03.988512 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 8 00:16:03.988529 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:16:03.988650 kernel: rtc_cmos 00:04: registered as rtc0 Nov 8 00:16:03.988776 kernel: rtc_cmos 00:04: setting system clock to 2025-11-08T00:16:03 UTC (1762560963) Nov 8 00:16:03.988898 kernel: rtc_cmos 
00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 8 00:16:03.988909 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 8 00:16:03.988917 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:16:03.988925 kernel: Segment Routing with IPv6 Nov 8 00:16:03.988941 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:16:03.988952 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:16:03.988960 kernel: Key type dns_resolver registered Nov 8 00:16:03.988969 kernel: IPI shorthand broadcast: enabled Nov 8 00:16:03.988977 kernel: sched_clock: Marking stable (1070002398, 189633122)->(1310964616, -51329096) Nov 8 00:16:03.988985 kernel: registered taskstats version 1 Nov 8 00:16:03.988993 kernel: Loading compiled-in X.509 certificates Nov 8 00:16:03.989001 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:16:03.989009 kernel: Key type .fscrypt registered Nov 8 00:16:03.989017 kernel: Key type fscrypt-provisioning registered Nov 8 00:16:03.989027 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:16:03.989035 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:16:03.989043 kernel: ima: No architecture policies found Nov 8 00:16:03.989051 kernel: clk: Disabling unused clocks Nov 8 00:16:03.989059 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:16:03.989067 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:16:03.989075 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:16:03.989082 kernel: Run /init as init process Nov 8 00:16:03.989090 kernel: with arguments: Nov 8 00:16:03.989101 kernel: /init Nov 8 00:16:03.989108 kernel: with environment: Nov 8 00:16:03.989116 kernel: HOME=/ Nov 8 00:16:03.989124 kernel: TERM=linux Nov 8 00:16:03.989134 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:16:03.989144 systemd[1]: Detected virtualization kvm. Nov 8 00:16:03.989153 systemd[1]: Detected architecture x86-64. Nov 8 00:16:03.989161 systemd[1]: Running in initrd. Nov 8 00:16:03.989172 systemd[1]: No hostname configured, using default hostname. Nov 8 00:16:03.989180 systemd[1]: Hostname set to . Nov 8 00:16:03.989188 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:16:03.989196 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:16:03.989205 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:16:03.989213 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:16:03.989222 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:16:03.989230 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:16:03.989242 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:16:03.989263 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Nov 8 00:16:03.989344 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:16:03.989355 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:16:03.989367 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:16:03.989376 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:16:03.989387 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:16:03.989395 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:16:03.989404 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:16:03.989412 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:16:03.989421 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:16:03.989429 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:16:03.989438 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:16:03.989449 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:16:03.989458 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:16:03.989466 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:16:03.989475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:16:03.989484 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:16:03.989492 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:16:03.989501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:16:03.989509 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:16:03.989520 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:16:03.989529 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:16:03.989537 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:16:03.989545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:16:03.989554 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:16:03.989562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:16:03.989571 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:16:03.989604 systemd-journald[193]: Collecting audit messages is disabled. Nov 8 00:16:03.989624 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:16:03.989635 systemd-journald[193]: Journal started Nov 8 00:16:03.989653 systemd-journald[193]: Runtime Journal (/run/log/journal/3e56a9893d6e40aebb7488703c6cd1da) is 6.0M, max 48.4M, 42.3M free. Nov 8 00:16:03.979566 systemd-modules-load[194]: Inserted module 'overlay' Nov 8 00:16:04.054365 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:16:04.054392 kernel: Bridge firewalling registered Nov 8 00:16:04.054417 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:16:04.008410 systemd-modules-load[194]: Inserted module 'br_netfilter' Nov 8 00:16:04.055292 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:16:04.058147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:16:04.077419 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:16:04.078717 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:16:04.079824 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:16:04.094083 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:16:04.097816 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:16:04.102051 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:16:04.105545 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:16:04.123429 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:16:04.126610 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:16:04.130635 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:16:04.139719 dracut-cmdline[225]: dracut-dracut-053 Nov 8 00:16:04.143986 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:16:04.144771 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:16:04.164291 systemd-resolved[228]: Positive Trust Anchors: Nov 8 00:16:04.164305 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:16:04.164335 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:16:04.166831 systemd-resolved[228]: Defaulting to hostname 'linux'. Nov 8 00:16:04.167980 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:16:04.179631 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:16:04.254303 kernel: SCSI subsystem initialized Nov 8 00:16:04.263298 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:16:04.274300 kernel: iscsi: registered transport (tcp) Nov 8 00:16:04.299307 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:16:04.299356 kernel: QLogic iSCSI HBA Driver Nov 8 00:16:04.349609 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:16:04.358478 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:16:04.388704 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 8 00:16:04.388791 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:16:04.390628 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:16:04.436336 kernel: raid6: avx2x4 gen() 25873 MB/s Nov 8 00:16:04.453329 kernel: raid6: avx2x2 gen() 26610 MB/s Nov 8 00:16:04.471111 kernel: raid6: avx2x1 gen() 25101 MB/s Nov 8 00:16:04.471212 kernel: raid6: using algorithm avx2x2 gen() 26610 MB/s Nov 8 00:16:04.489119 kernel: raid6: .... xor() 19926 MB/s, rmw enabled Nov 8 00:16:04.489196 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:16:04.510315 kernel: xor: automatically using best checksumming function avx Nov 8 00:16:04.665318 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:16:04.681295 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:16:04.694518 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:16:04.706493 systemd-udevd[415]: Using default interface naming scheme 'v255'. Nov 8 00:16:04.711308 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:16:04.717435 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:16:04.735303 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Nov 8 00:16:04.772420 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:16:04.784458 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:16:04.852759 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:16:04.862479 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:16:04.877527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:16:04.882166 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:16:04.886618 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:16:04.890352 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:16:04.897301 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 8 00:16:04.907670 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 8 00:16:04.903489 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:16:04.913342 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:16:04.913386 kernel: GPT:9289727 != 19775487 Nov 8 00:16:04.913404 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:16:04.913415 kernel: GPT:9289727 != 19775487 Nov 8 00:16:04.913924 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:16:04.915140 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:16:04.921624 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:16:04.929311 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:16:04.929340 kernel: libata version 3.00 loaded. Nov 8 00:16:04.932031 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:16:04.932149 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:16:04.937170 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:16:04.943568 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:16:04.943572 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 8 00:16:04.943699 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:16:04.948922 kernel: AES CTR mode by8 optimization enabled Nov 8 00:16:04.948367 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:16:04.959918 kernel: ahci 0000:00:1f.2: version 3.0 Nov 8 00:16:04.960171 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465) Nov 8 00:16:04.960189 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 8 00:16:04.969320 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 8 00:16:04.969577 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 8 00:16:04.977459 kernel: scsi host0: ahci Nov 8 00:16:04.981106 kernel: scsi host1: ahci Nov 8 00:16:04.981352 kernel: scsi host2: ahci Nov 8 00:16:04.981564 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (461) Nov 8 00:16:04.977991 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:16:04.985586 kernel: scsi host3: ahci Nov 8 00:16:04.990304 kernel: scsi host4: ahci Nov 8 00:16:04.997315 kernel: scsi host5: ahci Nov 8 00:16:04.997573 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Nov 8 00:16:04.997586 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Nov 8 00:16:04.997596 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Nov 8 00:16:04.997606 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Nov 8 00:16:05.002473 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Nov 8 00:16:05.002539 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Nov 8 00:16:05.004971 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 8 00:16:05.077638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:16:05.087853 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 8 00:16:05.095316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:16:05.102753 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 8 00:16:05.104992 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 8 00:16:05.118523 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:16:05.121771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:16:05.129672 disk-uuid[557]: Primary Header is updated. Nov 8 00:16:05.129672 disk-uuid[557]: Secondary Entries is updated. Nov 8 00:16:05.129672 disk-uuid[557]: Secondary Header is updated. Nov 8 00:16:05.136305 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:16:05.142296 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:16:05.148298 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:16:05.150572 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 8 00:16:05.312313 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 8 00:16:05.332363 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 8 00:16:05.332446 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 8 00:16:05.332458 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 8 00:16:05.334314 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 8 00:16:05.334332 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 8 00:16:05.335315 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 8 00:16:05.336861 kernel: ata3.00: applying bridge limits Nov 8 00:16:05.337827 kernel: ata3.00: configured for UDMA/100 Nov 8 00:16:05.338325 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 8 00:16:05.403829 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 8 00:16:05.404089 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:16:05.417359 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 8 00:16:06.141062 disk-uuid[559]: The operation has completed successfully. Nov 8 00:16:06.143241 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:16:06.177255 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:16:06.177439 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:16:06.220465 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:16:06.225427 sh[597]: Success Nov 8 00:16:06.240306 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 8 00:16:06.278194 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:16:06.294340 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:16:06.297761 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:16:06.313586 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:16:06.313657 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:16:06.313682 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:16:06.315247 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:16:06.316464 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:16:06.325529 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:16:06.329084 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:16:06.341480 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:16:06.345781 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:16:06.358490 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:16:06.358521 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:16:06.358535 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:16:06.363294 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:16:06.373162 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:16:06.375912 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:16:06.468398 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Nov 8 00:16:06.483493 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:16:06.507418 systemd-networkd[775]: lo: Link UP Nov 8 00:16:06.507428 systemd-networkd[775]: lo: Gained carrier Nov 8 00:16:06.509110 systemd-networkd[775]: Enumeration completed Nov 8 00:16:06.509239 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:16:06.509533 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:16:06.509537 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:16:06.521894 systemd-networkd[775]: eth0: Link UP Nov 8 00:16:06.521898 systemd-networkd[775]: eth0: Gained carrier Nov 8 00:16:06.521907 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:16:06.523226 systemd[1]: Reached target network.target - Network. Nov 8 00:16:06.548346 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 8 00:16:06.757432 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:16:06.763572 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:16:06.881047 ignition[780]: Ignition 2.19.0 Nov 8 00:16:06.881061 ignition[780]: Stage: fetch-offline Nov 8 00:16:06.881128 ignition[780]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:16:06.881141 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:16:06.881304 ignition[780]: parsed url from cmdline: "" Nov 8 00:16:06.881309 ignition[780]: no config URL provided Nov 8 00:16:06.881316 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:16:06.881329 ignition[780]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:16:06.881363 ignition[780]: op(1): [started] loading QEMU firmware config module Nov 8 00:16:06.881369 ignition[780]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 8 00:16:06.899236 ignition[780]: op(1): [finished] loading QEMU firmware config module Nov 8 00:16:06.979476 ignition[780]: parsing config with SHA512: 018ba4e30285d7ca6f45de62d39f7f57a9ed51cd694b16c9b9507a574498a477fe085e5ac3fc4b0b3b9d48d46f57858e7d72105132fdbdbbc9b95b6467b2f111 Nov 8 00:16:06.983115 unknown[780]: fetched base config from "system" Nov 8 00:16:06.983133 unknown[780]: fetched user config from "qemu" Nov 8 00:16:06.983809 ignition[780]: fetch-offline: fetch-offline passed Nov 8 00:16:06.983989 ignition[780]: Ignition finished successfully Nov 8 00:16:06.990423 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:16:06.992763 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 8 00:16:07.012484 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:16:07.033578 ignition[789]: Ignition 2.19.0 Nov 8 00:16:07.033590 ignition[789]: Stage: kargs Nov 8 00:16:07.033853 ignition[789]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:16:07.033871 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:16:07.035402 ignition[789]: kargs: kargs passed Nov 8 00:16:07.040685 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Nov 8 00:16:07.035476 ignition[789]: Ignition finished successfully Nov 8 00:16:07.051488 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:16:07.068727 ignition[797]: Ignition 2.19.0 Nov 8 00:16:07.068739 ignition[797]: Stage: disks Nov 8 00:16:07.068934 ignition[797]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:16:07.068946 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:16:07.072585 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:16:07.069891 ignition[797]: disks: disks passed Nov 8 00:16:07.074762 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:16:07.069945 ignition[797]: Ignition finished successfully Nov 8 00:16:07.078090 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:16:07.081735 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:16:07.083473 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:16:07.086266 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:16:07.097518 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:16:07.113338 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:16:07.341497 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:16:07.355514 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:16:07.516308 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:16:07.516820 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:16:07.518569 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:16:07.530392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:16:07.533143 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:16:07.535841 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:16:07.559892 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Nov 8 00:16:07.559934 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:16:07.535903 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:16:07.569362 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:16:07.569392 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:16:07.535936 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:16:07.573989 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:16:07.566260 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:16:07.584468 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:16:07.586442 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:16:07.624352 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:16:07.630706 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:16:07.636309 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:16:07.686469 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:16:07.778653 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:16:07.786438 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:16:07.789904 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:16:07.800628 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:16:07.804147 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:16:07.822693 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:16:07.885124 ignition[934]: INFO : Ignition 2.19.0 Nov 8 00:16:07.885124 ignition[934]: INFO : Stage: mount Nov 8 00:16:07.887781 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:16:07.887781 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:16:07.887781 ignition[934]: INFO : mount: mount passed Nov 8 00:16:07.887781 ignition[934]: INFO : Ignition finished successfully Nov 8 00:16:07.889410 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:16:07.898522 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:16:08.309496 systemd-networkd[775]: eth0: Gained IPv6LL Nov 8 00:16:08.530517 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:16:08.539310 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Nov 8 00:16:08.542601 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:16:08.542617 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:16:08.542628 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:16:08.547301 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:16:08.549422 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:16:08.637846 ignition[960]: INFO : Ignition 2.19.0 Nov 8 00:16:08.637846 ignition[960]: INFO : Stage: files Nov 8 00:16:08.663225 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:16:08.663225 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:16:08.667389 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:16:08.670137 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:16:08.670137 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:16:08.675331 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:16:08.677585 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:16:08.677585 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:16:08.676080 unknown[960]: wrote ssh authorized keys file for user: core Nov 8 00:16:08.705228 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:16:08.705228 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:16:08.732470 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:16:08.867764 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:16:08.871145 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 8 00:16:09.384310 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:16:10.758991 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:16:10.758991 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:16:10.765601 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:16:10.765601 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:16:10.765601 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:16:10.765601 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 8 00:16:10.765601 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:16:10.765601 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 8 00:16:10.765601 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 8 00:16:10.765601 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 8 00:16:10.982826 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:16:10.988607 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 8 00:16:10.991399 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 8 00:16:10.991399 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:16:10.996047 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:16:10.998417 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:16:11.001299 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:16:11.004073 ignition[960]: INFO : files: files passed Nov 8 00:16:11.005315 ignition[960]: INFO : Ignition finished successfully Nov 8 00:16:11.009479 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:16:11.032515 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:16:11.033947 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:16:11.044332 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:16:11.044495 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:16:11.050913 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Nov 8 00:16:11.058319 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:16:11.058319 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:16:11.064024 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:16:11.069497 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:16:11.071745 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:16:11.091507 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:16:11.130432 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:16:11.130617 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:16:11.135291 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:16:11.138071 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:16:11.141449 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:16:11.142647 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:16:11.169567 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:16:11.186475 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:16:11.198253 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:16:11.199047 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:16:11.276492 ignition[1014]: INFO : Ignition 2.19.0 Nov 8 00:16:11.276492 ignition[1014]: INFO : Stage: umount Nov 8 00:16:11.276492 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:16:11.276492 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 8 00:16:11.276492 ignition[1014]: INFO : umount: umount passed Nov 8 00:16:11.276492 ignition[1014]: INFO : Ignition finished successfully Nov 8 00:16:11.199597 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:16:11.199859 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:16:11.199983 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:16:11.200683 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:16:11.200976 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:16:11.201267 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:16:11.201837 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:16:11.202104 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:16:11.202659 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:16:11.202948 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:16:11.203228 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:16:11.203777 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Nov 8 00:16:11.204327 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:16:11.204590 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:16:11.204701 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:16:11.205230 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:16:11.205768 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:16:11.206018 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:16:11.206169 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:16:11.206313 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:16:11.206423 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:16:11.206856 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:16:11.206974 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:16:11.207215 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:16:11.207681 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:16:11.212384 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:16:11.213145 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:16:11.213931 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:16:11.214745 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:16:11.214877 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:16:11.215421 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:16:11.215559 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:16:11.216195 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:16:11.216392 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:16:11.217029 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:16:11.217188 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:16:11.218680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:16:11.219125 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:16:11.219253 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:16:11.220540 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:16:11.221121 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:16:11.221262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:16:11.221798 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:16:11.221956 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:16:11.227838 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:16:11.227982 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:16:11.249622 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:16:11.249794 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:16:11.250455 systemd[1]: Stopped target network.target - Network. Nov 8 00:16:11.250806 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:16:11.250889 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Nov 8 00:16:11.251760 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:16:11.251826 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:16:11.252317 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:16:11.252387 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:16:11.252878 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:16:11.252950 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:16:11.253954 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:16:11.254121 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:16:11.255964 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:16:11.272565 systemd-networkd[775]: eth0: DHCPv6 lease lost Nov 8 00:16:11.273510 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:16:11.273648 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:16:11.278750 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:16:11.278896 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:16:11.281218 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:16:11.281352 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:16:11.485248 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Nov 8 00:16:11.285215 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:16:11.285292 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:16:11.288256 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:16:11.288443 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:16:11.307509 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:16:11.310762 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:16:11.310857 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:16:11.315096 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:16:11.315154 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:16:11.319367 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:16:11.319420 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:16:11.321248 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:16:11.321323 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:16:11.324994 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:16:11.338622 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:16:11.338766 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:16:11.349697 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:16:11.349905 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:16:11.352710 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:16:11.352773 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:16:11.355895 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:16:11.355940 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 8 00:16:11.359957 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:16:11.360010 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:16:11.363144 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:16:11.363196 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:16:11.366211 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:16:11.366263 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:16:11.378494 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:16:11.381190 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:16:11.381265 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:16:11.384761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:16:11.384817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:16:11.388515 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:16:11.388629 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:16:11.392259 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:16:11.405402 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:16:11.414942 systemd[1]: Switching root. Nov 8 00:16:11.552241 systemd-journald[193]: Journal stopped Nov 8 00:16:13.166924 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:16:13.167003 kernel: SELinux: policy capability open_perms=1 Nov 8 00:16:13.167016 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:16:13.167028 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:16:13.167039 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:16:13.167051 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:16:13.167067 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:16:13.167088 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:16:13.167107 kernel: audit: type=1403 audit(1762560972.274:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:16:13.167125 systemd[1]: Successfully loaded SELinux policy in 45.349ms. Nov 8 00:16:13.167157 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.694ms. Nov 8 00:16:13.167170 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:16:13.167189 systemd[1]: Detected virtualization kvm. Nov 8 00:16:13.167201 systemd[1]: Detected architecture x86-64. Nov 8 00:16:13.167214 systemd[1]: Detected first boot. Nov 8 00:16:13.167230 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:16:13.167244 zram_generator::config[1058]: No configuration found. Nov 8 00:16:13.167264 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:16:13.167289 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:16:13.167302 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:16:13.167317 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Nov 8 00:16:13.167331 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:16:13.167343 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:16:13.167359 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:16:13.167371 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:16:13.167383 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:16:13.167396 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:16:13.167417 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:16:13.167429 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:16:13.167441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:16:13.167454 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:16:13.167469 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:16:13.167485 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:16:13.167498 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:16:13.167510 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:16:13.167522 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:16:13.167535 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:16:13.167551 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:16:13.167567 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:16:13.167579 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:16:13.167595 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:16:13.167607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:16:13.167619 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:16:13.167632 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:16:13.167643 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:16:13.167656 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:16:13.167675 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:16:13.167688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:16:13.167705 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:16:13.167721 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:16:13.167733 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:16:13.167745 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:16:13.167759 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:16:13.167772 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:16:13.167784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 8 00:16:13.167796 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:16:13.167809 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:16:13.167824 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:16:13.167837 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:16:13.167854 systemd[1]: Reached target machines.target - Containers. Nov 8 00:16:13.167866 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:16:13.167878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:16:13.167891 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:16:13.167903 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:16:13.167915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:16:13.167934 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:16:13.167946 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:16:13.167958 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:16:13.167971 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:16:13.167983 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:16:13.167995 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:16:13.168008 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:16:13.168023 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:16:13.168037 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:16:13.168053 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:16:13.168067 kernel: loop: module loaded Nov 8 00:16:13.168082 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:16:13.168096 kernel: ACPI: bus type drm_connector registered Nov 8 00:16:13.168107 kernel: fuse: init (API version 7.39) Nov 8 00:16:13.168119 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:16:13.168131 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:16:13.168163 systemd-journald[1142]: Collecting audit messages is disabled. Nov 8 00:16:13.168194 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:16:13.168207 systemd-journald[1142]: Journal started Nov 8 00:16:13.168228 systemd-journald[1142]: Runtime Journal (/run/log/journal/3e56a9893d6e40aebb7488703c6cd1da) is 6.0M, max 48.4M, 42.3M free. Nov 8 00:16:12.856083 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:16:13.170552 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:16:12.882341 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 8 00:16:12.882946 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:16:12.883467 systemd[1]: systemd-journald.service: Consumed 1.105s CPU time. Nov 8 00:16:13.172707 systemd[1]: Stopped verity-setup.service. 
Nov 8 00:16:13.178332 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:16:13.182587 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:16:13.183356 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:16:13.185228 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:16:13.187202 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:16:13.188977 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:16:13.190933 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:16:13.192917 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:16:13.194812 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:16:13.197069 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:16:13.199492 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:16:13.199683 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:16:13.201954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:16:13.202133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:16:13.204357 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:16:13.204539 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:16:13.206624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:16:13.206814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:16:13.209131 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:16:13.209418 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:16:13.211542 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:16:13.211729 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:16:13.213851 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:16:13.216057 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:16:13.218517 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:16:13.236526 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:16:13.244368 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:16:13.247460 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:16:13.249397 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:16:13.249434 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:16:13.252308 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:16:13.255633 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:16:13.258733 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:16:13.260641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:16:13.264830 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
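[Note: the modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse, and modprobe@loop services above are all instances of systemd's modprobe@.service template, which expands the instance name into the module to load. Abridged sketch of the upstream template (not the verbatim unit):]

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # %I is the instance name, so modprobe@dm_mod.service loads dm_mod;
    # the leading "-" tells systemd to ignore a failing modprobe
    ExecStart=-/sbin/modprobe -abq %I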
Nov 8 00:16:13.270402 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:16:13.272835 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:16:13.274361 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:16:13.276599 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:16:13.278065 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:16:13.287580 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:16:13.288988 systemd-journald[1142]: Time spent on flushing to /var/log/journal/3e56a9893d6e40aebb7488703c6cd1da is 27.985ms for 949 entries. Nov 8 00:16:13.288988 systemd-journald[1142]: System Journal (/var/log/journal/3e56a9893d6e40aebb7488703c6cd1da) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:16:13.348150 systemd-journald[1142]: Received client request to flush runtime journal. Nov 8 00:16:13.348220 kernel: loop0: detected capacity change from 0 to 142488 Nov 8 00:16:13.296026 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:16:13.302233 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:16:13.304605 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:16:13.306836 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:16:13.310311 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:16:13.318968 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:16:13.326403 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:16:13.330162 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:16:13.342131 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:16:13.347856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:16:13.351391 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:16:13.358398 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:16:13.369709 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:16:13.381562 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:16:13.385463 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:16:13.386258 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:16:13.386353 kernel: loop1: detected capacity change from 0 to 229808 Nov 8 00:16:13.391106 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:16:13.445310 kernel: loop2: detected capacity change from 0 to 140768 Nov 8 00:16:13.451972 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Nov 8 00:16:13.451990 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Nov 8 00:16:13.460525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
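[Note: the journal size limits reported above (runtime journal max 48.4M, system journal max 195.6M) are journald's computed defaults, derived from the size of the backing filesystems rather than from configuration. Explicit caps could be pinned in journald.conf; the values below are hypothetical, chosen only to mirror the reported defaults:]

    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=196M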
Nov 8 00:16:13.508784 kernel: loop3: detected capacity change from 0 to 142488 Nov 8 00:16:13.528319 kernel: loop4: detected capacity change from 0 to 229808 Nov 8 00:16:13.544296 kernel: loop5: detected capacity change from 0 to 140768 Nov 8 00:16:13.556627 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 8 00:16:13.557254 (sd-merge)[1197]: Merged extensions into '/usr'. Nov 8 00:16:13.562235 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:16:13.562416 systemd[1]: Reloading... Nov 8 00:16:13.663312 zram_generator::config[1223]: No configuration found. Nov 8 00:16:13.729368 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:16:13.798208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:16:13.847559 systemd[1]: Reloading finished in 284 ms. Nov 8 00:16:13.879879 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:16:13.882361 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:16:13.901535 systemd[1]: Starting ensure-sysext.service... Nov 8 00:16:13.904224 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:16:13.909565 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:16:13.909663 systemd[1]: Reloading... Nov 8 00:16:13.946735 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:16:13.947133 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:16:13.948219 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:16:13.949501 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Nov 8 00:16:13.949792 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Nov 8 00:16:13.954229 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:16:13.954429 systemd-tmpfiles[1261]: Skipping /boot Nov 8 00:16:13.980929 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:16:13.981064 systemd-tmpfiles[1261]: Skipping /boot Nov 8 00:16:13.995306 zram_generator::config[1289]: No configuration found. Nov 8 00:16:14.124028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:16:14.179922 systemd[1]: Reloading finished in 269 ms. Nov 8 00:16:14.199897 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:16:14.211991 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:16:14.224696 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:16:14.228054 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:16:14.231465 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
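[Note: the (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr. systemd-sysext only merges an image that carries an extension-release file compatible with the host's os-release; a minimal sketch for the kubernetes image, with field values assumed rather than read from the actual image:]

    # usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0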
Nov 8 00:16:14.237234 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:16:14.244409 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:16:14.249519 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:16:14.254039 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:16:14.254214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:16:14.255522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:16:14.259123 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:16:14.262528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:16:14.264439 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:16:14.267139 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:16:14.269362 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:16:14.270736 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:16:14.270978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:16:14.278754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:16:14.278968 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:16:14.287811 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:16:14.291358 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:16:14.291625 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:16:14.301693 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Nov 8 00:16:14.304014 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:16:14.304599 augenrules[1355]: No rules Nov 8 00:16:14.304878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:16:14.312561 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:16:14.319681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:16:14.325709 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:16:14.327659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:16:14.331349 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:16:14.333146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:16:14.334254 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:16:14.336538 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:16:14.339679 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Nov 8 00:16:14.342183 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:16:14.345789 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:16:14.348767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:16:14.349003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:16:14.353600 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:16:14.353825 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:16:14.356568 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:16:14.356768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:16:14.359401 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:16:14.382010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:16:14.382155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:16:14.389528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:16:14.394506 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:16:14.397702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:16:14.400885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:16:14.402922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:16:14.407764 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:16:14.412203 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:16:14.412242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:16:14.412318 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1374) Nov 8 00:16:14.413116 systemd[1]: Finished ensure-sysext.service. Nov 8 00:16:14.428625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:16:14.428893 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:16:14.431524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:16:14.431725 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:16:14.434145 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:16:14.434396 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:16:14.442747 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:16:14.443105 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:16:14.456463 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:16:14.464044 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 8 00:16:14.466497 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:16:14.470447 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:16:14.478814 systemd-resolved[1330]: Positive Trust Anchors: Nov 8 00:16:14.478825 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:16:14.478858 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:16:14.483772 systemd-resolved[1330]: Defaulting to hostname 'linux'. Nov 8 00:16:14.489851 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:16:14.504229 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:16:14.539228 systemd-networkd[1401]: lo: Link UP Nov 8 00:16:14.539238 systemd-networkd[1401]: lo: Gained carrier Nov 8 00:16:14.541478 systemd-networkd[1401]: Enumeration completed Nov 8 00:16:14.542080 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:16:14.543427 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:16:14.543513 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:16:14.544499 systemd[1]: Reached target network.target - Network. Nov 8 00:16:14.545134 systemd-networkd[1401]: eth0: Link UP Nov 8 00:16:14.545191 systemd-networkd[1401]: eth0: Gained carrier Nov 8 00:16:14.545242 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:16:14.553534 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:16:14.555920 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:16:14.558304 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:16:14.561666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:16:14.564209 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 8 00:16:14.588560 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:16:14.604312 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 8 00:16:14.604704 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 8 00:16:14.604959 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 8 00:16:14.608316 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:16:14.609989 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:16:15.407676 systemd-resolved[1330]: Clock change detected. Flushing caches. Nov 8 00:16:15.407886 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
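[Note: eth0 above was matched by /usr/lib/systemd/network/zz-default.network and configured via DHCP (10.0.0.21/16, gateway 10.0.0.1). Flatcar's catch-all network unit is approximately the following; this is a sketch, not the verbatim file:]

    [Match]
    # the zz- prefix sorts last, so this only claims interfaces
    # no earlier .network file has matched
    Name=*

    [Network]
    DHCP=yes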
Nov 8 00:16:15.407957 systemd-timesyncd[1405]: Initial clock synchronization to Sat 2025-11-08 00:16:15.407610 UTC. Nov 8 00:16:15.417392 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:16:15.421212 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:16:15.530609 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:16:15.570456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:16:15.609703 kernel: kvm_amd: TSC scaling supported Nov 8 00:16:15.609780 kernel: kvm_amd: Nested Virtualization enabled Nov 8 00:16:15.609798 kernel: kvm_amd: Nested Paging enabled Nov 8 00:16:15.611595 kernel: kvm_amd: LBR virtualization supported Nov 8 00:16:15.611622 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 8 00:16:15.613624 kernel: kvm_amd: Virtual GIF supported Nov 8 00:16:15.636619 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:16:15.668308 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:16:15.720149 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:16:15.743744 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:16:15.756377 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:16:15.791013 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:16:15.793424 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:16:15.795272 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:16:15.797166 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:16:15.799230 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:16:15.801527 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:16:15.803392 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:16:15.805478 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:16:15.807726 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:16:15.807758 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:16:15.809257 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:16:15.811765 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:16:15.816255 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:16:15.829245 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:16:15.832787 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:16:15.835123 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:16:15.837004 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:16:15.838675 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:16:15.840355 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:16:15.840395 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Nov 8 00:16:15.842796 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:16:15.846207 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:16:15.847390 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:16:15.852735 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:16:15.857810 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:16:15.859811 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:16:15.862192 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:16:15.865397 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:16:15.869261 jq[1440]: false Nov 8 00:16:15.872740 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:16:15.879848 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:16:15.889803 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:16:15.894392 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:16:15.895171 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:16:15.901763 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:16:15.906805 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:16:15.910244 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:16:15.913003 extend-filesystems[1441]: Found loop3 Nov 8 00:16:15.914350 extend-filesystems[1441]: Found loop4 Nov 8 00:16:15.924140 extend-filesystems[1441]: Found loop5 Nov 8 00:16:15.924140 extend-filesystems[1441]: Found sr0 Nov 8 00:16:15.924140 extend-filesystems[1441]: Found vda Nov 8 00:16:15.924140 extend-filesystems[1441]: Found vda1 Nov 8 00:16:15.924140 extend-filesystems[1441]: Found vda2 Nov 8 00:16:15.924140 extend-filesystems[1441]: Found vda3 Nov 8 00:16:15.924140 extend-filesystems[1441]: Found usr Nov 8 00:16:15.924140 extend-filesystems[1441]: Found vda4 Nov 8 00:16:15.924140 extend-filesystems[1441]: Found vda6 Nov 8 00:16:15.924140 extend-filesystems[1441]: Found vda7 Nov 8 00:16:15.924140 extend-filesystems[1441]: Found vda9 Nov 8 00:16:15.924140 extend-filesystems[1441]: Checking size of /dev/vda9 Nov 8 00:16:15.965662 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1374) Nov 8 00:16:15.914703 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:16:15.919637 dbus-daemon[1439]: [system] SELinux support is enabled Nov 8 00:16:15.966030 extend-filesystems[1441]: Resized partition /dev/vda9 Nov 8 00:16:15.972637 jq[1456]: true Nov 8 00:16:15.914975 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Nov 8 00:16:15.973057 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:16:15.975410 update_engine[1455]: I20251108 00:16:15.965960 1455 main.cc:92] Flatcar Update Engine starting Nov 8 00:16:15.975410 update_engine[1455]: I20251108 00:16:15.975157 1455 update_check_scheduler.cc:74] Next update check in 10m10s Nov 8 00:16:15.915333 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:16:15.915550 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:16:15.976177 jq[1464]: true Nov 8 00:16:15.925916 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:16:15.939779 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:16:15.940053 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:16:15.958018 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:16:15.963417 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:16:15.963444 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:16:15.966922 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:16:15.966945 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:16:15.988599 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 8 00:16:15.988881 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:16:15.990021 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:16:15.990518 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:16:15.991277 systemd-logind[1450]: New seat seat0. Nov 8 00:16:15.996414 tar[1462]: linux-amd64/LICENSE Nov 8 00:16:15.998410 tar[1462]: linux-amd64/helm Nov 8 00:16:16.016242 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:16:16.018171 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:16:16.187401 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 8 00:16:16.338807 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:16:16.353103 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:16:16.353103 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 8 00:16:16.353103 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 8 00:16:16.358001 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Nov 8 00:16:16.357627 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:16:16.357872 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:16:16.364104 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:16:16.374563 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:16:16.376736 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
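[Note: locksmithd above reports strategy="reboot" and update_engine schedules its next check; both consult /etc/flatcar/update.conf, which Ignition wrote in op(8) earlier. The file's contents are not shown in the log; a typical file looks like the following, with both values assumed:]

    GROUP=stable
    REBOOT_STRATEGY=reboot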
Nov 8 00:16:16.379963 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:16:16.439985 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:16:16.464353 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:16:16.480253 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:16:16.480504 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:16:16.513960 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:16:16.536804 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:16:16.546930 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:16:16.550495 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:16:16.552442 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:16:16.644767 containerd[1465]: time="2025-11-08T00:16:16.644595032Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:16:16.782862 systemd-networkd[1401]: eth0: Gained IPv6LL Nov 8 00:16:16.788181 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:16:16.792090 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:16:16.794231 containerd[1465]: time="2025-11-08T00:16:16.793053735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:16.795730 containerd[1465]: time="2025-11-08T00:16:16.795666305Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:16:16.795730 containerd[1465]: time="2025-11-08T00:16:16.795721358Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:16:16.795802 containerd[1465]: time="2025-11-08T00:16:16.795745984Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:16:16.796023 containerd[1465]: time="2025-11-08T00:16:16.795993128Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:16:16.796023 containerd[1465]: time="2025-11-08T00:16:16.796020118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:16.796148 containerd[1465]: time="2025-11-08T00:16:16.796125726Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:16:16.796148 containerd[1465]: time="2025-11-08T00:16:16.796144842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:16.796436 containerd[1465]: time="2025-11-08T00:16:16.796409789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:16:16.796436 containerd[1465]: time="2025-11-08T00:16:16.796431320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 8 00:16:16.796511 containerd[1465]: time="2025-11-08T00:16:16.796445606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:16:16.796511 containerd[1465]: time="2025-11-08T00:16:16.796456517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:16.796670 containerd[1465]: time="2025-11-08T00:16:16.796643357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:16.796951 containerd[1465]: time="2025-11-08T00:16:16.796926969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:16:16.797097 containerd[1465]: time="2025-11-08T00:16:16.797074055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:16:16.797097 containerd[1465]: time="2025-11-08T00:16:16.797093572Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:16:16.797214 containerd[1465]: time="2025-11-08T00:16:16.797194340Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:16:16.797279 containerd[1465]: time="2025-11-08T00:16:16.797256898Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:16:16.803040 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 8 00:16:16.806296 containerd[1465]: time="2025-11-08T00:16:16.806220788Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:16:16.806296 containerd[1465]: time="2025-11-08T00:16:16.806283967Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:16:16.806461 containerd[1465]: time="2025-11-08T00:16:16.806300868Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:16:16.806461 containerd[1465]: time="2025-11-08T00:16:16.806318672Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:16:16.806461 containerd[1465]: time="2025-11-08T00:16:16.806332628Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:16:16.806534 containerd[1465]: time="2025-11-08T00:16:16.806479213Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:16:16.806863 containerd[1465]: time="2025-11-08T00:16:16.806834629Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:16:16.807036 containerd[1465]: time="2025-11-08T00:16:16.806995621Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:16:16.807074 containerd[1465]: time="2025-11-08T00:16:16.807039534Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:16:16.807074 containerd[1465]: time="2025-11-08T00:16:16.807062126Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 8 00:16:16.807140 containerd[1465]: time="2025-11-08T00:16:16.807077525Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:16:16.807140 containerd[1465]: time="2025-11-08T00:16:16.807111409Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:16:16.807140 containerd[1465]: time="2025-11-08T00:16:16.807130985Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:16:16.807244 containerd[1465]: time="2025-11-08T00:16:16.807145973Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:16:16.807244 containerd[1465]: time="2025-11-08T00:16:16.807169357Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:16:16.807244 containerd[1465]: time="2025-11-08T00:16:16.807191208Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:16:16.807244 containerd[1465]: time="2025-11-08T00:16:16.807217097Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:16:16.807244 containerd[1465]: time="2025-11-08T00:16:16.807230081Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:16:16.807606 containerd[1465]: time="2025-11-08T00:16:16.807436398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807606 containerd[1465]: time="2025-11-08T00:16:16.807475632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807606 containerd[1465]: time="2025-11-08T00:16:16.807510978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807606 containerd[1465]: time="2025-11-08T00:16:16.807530805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807606 containerd[1465]: time="2025-11-08T00:16:16.807550722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807606 containerd[1465]: time="2025-11-08T00:16:16.807564388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807619561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807633928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807647624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807662151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807673723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"...
type=io.containerd.grpc.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807723476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807736982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807752882Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807807795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807823594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.807918 containerd[1465]: time="2025-11-08T00:16:16.807835767Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:16:16.810767 containerd[1465]: time="2025-11-08T00:16:16.807945843Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:16:16.810767 containerd[1465]: time="2025-11-08T00:16:16.808066029Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:16:16.810767 containerd[1465]: time="2025-11-08T00:16:16.808082239Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:16:16.810767 containerd[1465]: time="2025-11-08T00:16:16.808098169Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:16:16.810767 containerd[1465]: time="2025-11-08T00:16:16.808108879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.810767 containerd[1465]: time="2025-11-08T00:16:16.808138234Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:16:16.810767 containerd[1465]: time="2025-11-08T00:16:16.808151659Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:16:16.810767 containerd[1465]: time="2025-11-08T00:16:16.808165165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 00:16:16.808536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.808457954Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.808563422Z" level=info msg="Connect containerd service" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.808857583Z" level=info msg="using legacy CRI server" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.808871860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.809092093Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.809788088Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:16:16.811169 containerd[1465]: 
time="2025-11-08T00:16:16.809941867Z" level=info msg="Start subscribing containerd event" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.810008662Z" level=info msg="Start recovering state" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.810819242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.810901707Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.811051768Z" level=info msg="Start event monitor" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.811077566Z" level=info msg="Start snapshots syncer" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.811090110Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.811099277Z" level=info msg="Start streaming server" Nov 8 00:16:16.811169 containerd[1465]: time="2025-11-08T00:16:16.811160813Z" level=info msg="containerd successfully booted in 0.167711s" Nov 8 00:16:16.813554 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:16:16.816045 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:16:16.877726 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:16:16.903117 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:16:16.903505 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 8 00:16:16.906338 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:16:16.930182 tar[1462]: linux-amd64/README.md Nov 8 00:16:16.972748 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:16:18.181154 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:16:18.184973 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:60662.service - OpenSSH per-connection server daemon (10.0.0.1:60662). Nov 8 00:16:18.191904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:16:18.194378 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:16:18.198093 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:16:18.198314 systemd[1]: Startup finished in 1.211s (kernel) + 8.513s (initrd) + 5.173s (userspace) = 14.898s. Nov 8 00:16:18.253264 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 60662 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:16:18.255730 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:18.265154 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:16:18.295433 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:16:18.299513 systemd-logind[1450]: New session 1 of user core. Nov 8 00:16:18.311021 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:16:18.325875 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:16:18.329073 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:16:18.466038 systemd[1561]: Queued start job for default target default.target. 
Nov 8 00:16:18.478179 systemd[1561]: Created slice app.slice - User Application Slice. Nov 8 00:16:18.478210 systemd[1561]: Reached target paths.target - Paths. Nov 8 00:16:18.478225 systemd[1561]: Reached target timers.target - Timers. Nov 8 00:16:18.480038 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:16:18.495260 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:16:18.495442 systemd[1561]: Reached target sockets.target - Sockets. Nov 8 00:16:18.495458 systemd[1561]: Reached target basic.target - Basic System. Nov 8 00:16:18.495504 systemd[1561]: Reached target default.target - Main User Target. Nov 8 00:16:18.495546 systemd[1561]: Startup finished in 157ms. Nov 8 00:16:18.496054 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:16:18.497907 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:16:18.561603 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:60666.service - OpenSSH per-connection server daemon (10.0.0.1:60666). Nov 8 00:16:18.609526 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 60666 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:16:18.611817 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:18.616843 systemd-logind[1450]: New session 2 of user core. Nov 8 00:16:18.625719 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:16:18.683241 sshd[1577]: pam_unix(sshd:session): session closed for user core Nov 8 00:16:18.704309 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:60666.service: Deactivated successfully. Nov 8 00:16:18.706016 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:16:18.708456 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:16:18.713935 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:60680.service - OpenSSH per-connection server daemon (10.0.0.1:60680). Nov 8 00:16:18.714913 systemd-logind[1450]: Removed session 2. Nov 8 00:16:18.803402 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 60680 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:16:18.806598 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:18.811070 systemd-logind[1450]: New session 3 of user core. Nov 8 00:16:18.815721 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:16:18.865552 kubelet[1553]: E1108 00:16:18.865477 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:16:18.867344 sshd[1585]: pam_unix(sshd:session): session closed for user core Nov 8 00:16:18.879942 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:16:18.880131 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:16:18.880414 systemd[1]: kubelet.service: Consumed 1.988s CPU time. Nov 8 00:16:18.880895 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:60680.service: Deactivated successfully. Nov 8 00:16:18.882409 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:16:18.883754 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. 
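The kubelet failure above is the normal pre-bootstrap state: the unit exits because /var/lib/kubelet/config.yaml does not exist yet, and that file is only written when kubeadm init or kubeadm join runs. An abridged sketch of the kind of KubeletConfiguration kubeadm generates there — only cgroupDriver is confirmed by this log, via the later "CgroupDriver":"systemd" node config; everything else kubeadm fills in per cluster:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # systemd matches the SystemdCgroup:true runc option in the containerd CRI config above
    cgroupDriver: systemd

Until that file lands, systemd keeps scheduling restarts, which is why the same error repeats at restart counters 1 and 2 further down.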
Nov 8 00:16:18.890896 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:60690.service - OpenSSH per-connection server daemon (10.0.0.1:60690). Nov 8 00:16:18.891712 systemd-logind[1450]: Removed session 3. Nov 8 00:16:18.927232 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 60690 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:16:18.928947 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:18.932790 systemd-logind[1450]: New session 4 of user core. Nov 8 00:16:18.947713 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:16:19.004769 sshd[1594]: pam_unix(sshd:session): session closed for user core Nov 8 00:16:19.014182 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:60690.service: Deactivated successfully. Nov 8 00:16:19.015993 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:16:19.017374 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:16:19.030880 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:60704.service - OpenSSH per-connection server daemon (10.0.0.1:60704). Nov 8 00:16:19.031876 systemd-logind[1450]: Removed session 4. Nov 8 00:16:19.063652 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 60704 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:16:19.065326 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:19.069037 systemd-logind[1450]: New session 5 of user core. Nov 8 00:16:19.078699 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:16:19.139627 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:16:19.140085 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:16:19.157617 sudo[1604]: pam_unix(sudo:session): session closed for user root Nov 8 00:16:19.160064 sshd[1601]: pam_unix(sshd:session): session closed for user core Nov 8 00:16:19.171980 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:60704.service: Deactivated successfully. Nov 8 00:16:19.174130 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:16:19.176168 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:16:19.181943 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:60718.service - OpenSSH per-connection server daemon (10.0.0.1:60718). Nov 8 00:16:19.183114 systemd-logind[1450]: Removed session 5. Nov 8 00:16:19.223724 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 60718 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:16:19.225339 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:19.229466 systemd-logind[1450]: New session 6 of user core. Nov 8 00:16:19.238713 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 8 00:16:19.297427 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:16:19.297941 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:16:19.302507 sudo[1613]: pam_unix(sudo:session): session closed for user root Nov 8 00:16:19.309261 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:16:19.309717 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:16:19.329849 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:16:19.331794 auditctl[1616]: No rules Nov 8 00:16:19.333490 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:16:19.333829 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:16:19.336051 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:16:19.371415 augenrules[1634]: No rules Nov 8 00:16:19.373746 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:16:19.375350 sudo[1612]: pam_unix(sudo:session): session closed for user root Nov 8 00:16:19.377359 sshd[1609]: pam_unix(sshd:session): session closed for user core Nov 8 00:16:19.393837 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:60718.service: Deactivated successfully. Nov 8 00:16:19.395739 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:16:19.397432 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:16:19.404963 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:60732.service - OpenSSH per-connection server daemon (10.0.0.1:60732). Nov 8 00:16:19.405875 systemd-logind[1450]: Removed session 6. Nov 8 00:16:19.440285 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 60732 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:16:19.441762 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:16:19.446779 systemd-logind[1450]: New session 7 of user core. Nov 8 00:16:19.456787 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:16:19.510645 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:16:19.511093 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:16:20.070833 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:16:20.071043 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:16:20.490623 dockerd[1663]: time="2025-11-08T00:16:20.490426609Z" level=info msg="Starting up" Nov 8 00:16:21.773062 dockerd[1663]: time="2025-11-08T00:16:21.772725357Z" level=info msg="Loading containers: start." Nov 8 00:16:21.923604 kernel: Initializing XFRM netlink socket Nov 8 00:16:22.035226 systemd-networkd[1401]: docker0: Link UP Nov 8 00:16:22.060538 dockerd[1663]: time="2025-11-08T00:16:22.060492523Z" level=info msg="Loading containers: done." Nov 8 00:16:22.181879 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck240672089-merged.mount: Deactivated successfully. 
Nov 8 00:16:22.186846 dockerd[1663]: time="2025-11-08T00:16:22.186788323Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:16:22.186956 dockerd[1663]: time="2025-11-08T00:16:22.186929087Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:16:22.187108 dockerd[1663]: time="2025-11-08T00:16:22.187078157Z" level=info msg="Daemon has completed initialization" Nov 8 00:16:22.233679 dockerd[1663]: time="2025-11-08T00:16:22.233587087Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:16:22.233871 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:16:23.216055 containerd[1465]: time="2025-11-08T00:16:23.215980736Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 8 00:16:23.985248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390068990.mount: Deactivated successfully. Nov 8 00:16:25.159728 containerd[1465]: time="2025-11-08T00:16:25.159653597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:25.160401 containerd[1465]: time="2025-11-08T00:16:25.160356966Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 8 00:16:25.161460 containerd[1465]: time="2025-11-08T00:16:25.161423205Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:25.164527 containerd[1465]: time="2025-11-08T00:16:25.164488213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:25.165747 containerd[1465]: time="2025-11-08T00:16:25.165704204Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.949642525s" Nov 8 00:16:25.165791 containerd[1465]: time="2025-11-08T00:16:25.165744910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 8 00:16:25.166470 containerd[1465]: time="2025-11-08T00:16:25.166445654Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 8 00:16:26.616880 containerd[1465]: time="2025-11-08T00:16:26.616806368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:26.617734 containerd[1465]: time="2025-11-08T00:16:26.617661732Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 8 00:16:26.618850 containerd[1465]: time="2025-11-08T00:16:26.618816839Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:26.621890 containerd[1465]: time="2025-11-08T00:16:26.621842112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:26.623038 containerd[1465]: time="2025-11-08T00:16:26.623001055Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.456526898s" Nov 8 00:16:26.623082 containerd[1465]: time="2025-11-08T00:16:26.623038856Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 8 00:16:26.623618 containerd[1465]: time="2025-11-08T00:16:26.623514047Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 8 00:16:27.981401 containerd[1465]: time="2025-11-08T00:16:27.981307575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:27.982097 containerd[1465]: time="2025-11-08T00:16:27.982023217Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 8 00:16:27.983398 containerd[1465]: time="2025-11-08T00:16:27.983345617Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:27.986812 containerd[1465]: time="2025-11-08T00:16:27.986770430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:27.988046 containerd[1465]: time="2025-11-08T00:16:27.988007169Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.364461172s" Nov 8 00:16:27.988046 containerd[1465]: time="2025-11-08T00:16:27.988043367Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 8 00:16:27.988706 containerd[1465]: time="2025-11-08T00:16:27.988674601Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 8 00:16:29.130529 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:16:29.142729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:29.352333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:16:29.357145 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:16:29.624779 kubelet[1886]: E1108 00:16:29.624602 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:16:29.632187 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:16:29.632454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:16:30.052106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount585725393.mount: Deactivated successfully. Nov 8 00:16:30.422850 containerd[1465]: time="2025-11-08T00:16:30.422716268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:30.423524 containerd[1465]: time="2025-11-08T00:16:30.423480781Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 8 00:16:30.424629 containerd[1465]: time="2025-11-08T00:16:30.424604459Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:30.426492 containerd[1465]: time="2025-11-08T00:16:30.426451813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:30.427054 containerd[1465]: time="2025-11-08T00:16:30.427020169Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.438310814s" Nov 8 00:16:30.427082 containerd[1465]: time="2025-11-08T00:16:30.427053952Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 8 00:16:30.427501 containerd[1465]: time="2025-11-08T00:16:30.427475403Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 8 00:16:31.036207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607845355.mount: Deactivated successfully. 
Nov 8 00:16:32.219902 containerd[1465]: time="2025-11-08T00:16:32.219838744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:32.220739 containerd[1465]: time="2025-11-08T00:16:32.220687145Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 8 00:16:32.223565 containerd[1465]: time="2025-11-08T00:16:32.223505581Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:32.227354 containerd[1465]: time="2025-11-08T00:16:32.227316788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:32.228655 containerd[1465]: time="2025-11-08T00:16:32.228613961Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.801111106s" Nov 8 00:16:32.228720 containerd[1465]: time="2025-11-08T00:16:32.228655729Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 8 00:16:32.229143 containerd[1465]: time="2025-11-08T00:16:32.229098850Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:16:32.776998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524154241.mount: Deactivated successfully. 
Nov 8 00:16:32.784260 containerd[1465]: time="2025-11-08T00:16:32.784201646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:32.784933 containerd[1465]: time="2025-11-08T00:16:32.784853218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:16:32.786055 containerd[1465]: time="2025-11-08T00:16:32.786017451Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:32.788226 containerd[1465]: time="2025-11-08T00:16:32.788174156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:32.788929 containerd[1465]: time="2025-11-08T00:16:32.788873667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 559.751173ms" Nov 8 00:16:32.788929 containerd[1465]: time="2025-11-08T00:16:32.788924222Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:16:32.789706 containerd[1465]: time="2025-11-08T00:16:32.789673397Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 8 00:16:33.405362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248983692.mount: Deactivated successfully. Nov 8 00:16:36.402400 containerd[1465]: time="2025-11-08T00:16:36.402313437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:36.403168 containerd[1465]: time="2025-11-08T00:16:36.403090985Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 8 00:16:36.404583 containerd[1465]: time="2025-11-08T00:16:36.404543870Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:36.408513 containerd[1465]: time="2025-11-08T00:16:36.408489619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:36.410036 containerd[1465]: time="2025-11-08T00:16:36.409966319Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.620199847s" Nov 8 00:16:36.410099 containerd[1465]: time="2025-11-08T00:16:36.410029808Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 8 00:16:39.882721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
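Taken together, the pulls above (kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy at v1.33.5, coredns v1.12.0, pause 3.10, etcd 3.5.21-0) match kubeadm's default control-plane image set for this Kubernetes version. Assuming kubeadm is what drives this bootstrap, the same set can be pre-fetched in one step:

    kubeadm config images pull --kubernetes-version v1.33.5

which issues the same CRI PullImage calls that containerd is logging here.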
Nov 8 00:16:39.947908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:40.119762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:16:40.125620 (kubelet)[2045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:16:40.207187 kubelet[2045]: E1108 00:16:40.207122 2045 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:16:40.212369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:16:40.212623 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:16:40.852862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:16:40.864840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:40.890247 systemd[1]: Reloading requested from client PID 2060 ('systemctl') (unit session-7.scope)... Nov 8 00:16:40.890263 systemd[1]: Reloading... Nov 8 00:16:40.985625 zram_generator::config[2105]: No configuration found. Nov 8 00:16:41.528475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:16:41.613092 systemd[1]: Reloading finished in 722 ms. Nov 8 00:16:41.668268 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:41.672510 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:16:41.672801 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:16:41.674547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:41.850498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:16:41.855287 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:16:42.057090 kubelet[2149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:16:42.057090 kubelet[2149]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:16:42.057090 kubelet[2149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
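The three deprecation warnings below are kubelet telling the operator to move flag values into the config file it just loaded. Two of the three have direct KubeletConfiguration equivalents; the values in this sketch are taken from elsewhere in the log (the containerd socket from the CRI config dump, the plugin directory from the flexvolume message further down) and are shown as an illustration, not as this host's actual file:

    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no config-file equivalent; per the warning it is slated for removal in 1.35, when the sandbox image will come from the CRI instead.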
Nov 8 00:16:42.057551 kubelet[2149]: I1108 00:16:42.057152 2149 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:16:42.560380 kubelet[2149]: I1108 00:16:42.560332 2149 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:16:42.560380 kubelet[2149]: I1108 00:16:42.560372 2149 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:16:42.560648 kubelet[2149]: I1108 00:16:42.560631 2149 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:16:42.591180 kubelet[2149]: I1108 00:16:42.591120 2149 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:16:42.591332 kubelet[2149]: E1108 00:16:42.591295 2149 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:16:42.604272 kubelet[2149]: E1108 00:16:42.604214 2149 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:16:42.604272 kubelet[2149]: I1108 00:16:42.604253 2149 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:16:42.610168 kubelet[2149]: I1108 00:16:42.610126 2149 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:16:42.610428 kubelet[2149]: I1108 00:16:42.610394 2149 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:16:42.610664 kubelet[2149]: I1108 00:16:42.610418 2149 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:16:42.610765 kubelet[2149]: I1108 00:16:42.610675 2149 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:16:42.610765 kubelet[2149]: I1108 00:16:42.610695 2149 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:16:42.611773 kubelet[2149]: I1108 00:16:42.611744 2149 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:16:42.614865 kubelet[2149]: I1108 00:16:42.614831 2149 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:16:42.614865 kubelet[2149]: I1108 00:16:42.614860 2149 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:16:42.614942 kubelet[2149]: I1108 00:16:42.614904 2149 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:16:42.614942 kubelet[2149]: I1108 00:16:42.614932 2149 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:16:42.623286 kubelet[2149]: I1108 00:16:42.622872 2149 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:16:42.623286 kubelet[2149]: E1108 00:16:42.623235 2149 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:16:42.623286 kubelet[2149]: E1108 00:16:42.623236 2149 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:16:42.623469 kubelet[2149]: I1108 00:16:42.623424 2149 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:16:42.624330 kubelet[2149]: W1108 00:16:42.624131 2149 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:16:42.627638 kubelet[2149]: I1108 00:16:42.627612 2149 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:16:42.627717 kubelet[2149]: I1108 00:16:42.627685 2149 server.go:1289] "Started kubelet" Nov 8 00:16:42.628172 kubelet[2149]: I1108 00:16:42.628114 2149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:16:42.630673 kubelet[2149]: I1108 00:16:42.629629 2149 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:16:42.630673 kubelet[2149]: I1108 00:16:42.629634 2149 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:16:42.630673 kubelet[2149]: I1108 00:16:42.629847 2149 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:16:42.635640 kubelet[2149]: E1108 00:16:42.632174 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875dfe4b893b7bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:16:42.627643323 +0000 UTC m=+0.768308484,LastTimestamp:2025-11-08 00:16:42.627643323 +0000 UTC m=+0.768308484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:16:42.635640 kubelet[2149]: I1108 00:16:42.634199 2149 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:16:42.635640 kubelet[2149]: I1108 00:16:42.634373 2149 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:16:42.635640 kubelet[2149]: I1108 00:16:42.634524 2149 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:16:42.635640 kubelet[2149]: I1108 00:16:42.634735 2149 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:16:42.635865 kubelet[2149]: E1108 00:16:42.635652 2149 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:16:42.635865 kubelet[2149]: I1108 00:16:42.635837 2149 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:16:42.636254 kubelet[2149]: E1108 00:16:42.635887 2149 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:16:42.637137 
kubelet[2149]: I1108 00:16:42.637097 2149 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:16:42.637304 kubelet[2149]: I1108 00:16:42.637243 2149 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:16:42.639508 kubelet[2149]: E1108 00:16:42.639449 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms" Nov 8 00:16:42.640641 kubelet[2149]: I1108 00:16:42.640376 2149 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:16:42.640721 kubelet[2149]: E1108 00:16:42.640665 2149 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:16:42.662619 kubelet[2149]: I1108 00:16:42.662520 2149 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:16:42.665397 kubelet[2149]: I1108 00:16:42.663636 2149 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:16:42.665397 kubelet[2149]: I1108 00:16:42.663653 2149 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:16:42.665397 kubelet[2149]: I1108 00:16:42.663672 2149 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:16:42.665397 kubelet[2149]: I1108 00:16:42.663962 2149 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:16:42.665397 kubelet[2149]: I1108 00:16:42.663993 2149 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:16:42.665397 kubelet[2149]: I1108 00:16:42.664028 2149 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:16:42.665397 kubelet[2149]: I1108 00:16:42.664052 2149 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:16:42.665397 kubelet[2149]: E1108 00:16:42.664116 2149 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:16:42.665397 kubelet[2149]: E1108 00:16:42.665031 2149 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:16:42.737081 kubelet[2149]: E1108 00:16:42.737047 2149 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:16:42.764190 kubelet[2149]: E1108 00:16:42.764155 2149 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:16:42.817134 kubelet[2149]: I1108 00:16:42.817041 2149 policy_none.go:49] "None policy: Start" Nov 8 00:16:42.817134 kubelet[2149]: I1108 00:16:42.817076 2149 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:16:42.817134 kubelet[2149]: I1108 00:16:42.817099 2149 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:16:42.825548 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 8 00:16:42.837658 kubelet[2149]: E1108 00:16:42.837626 2149 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:16:42.840333 kubelet[2149]: E1108 00:16:42.840270 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Nov 8 00:16:42.847387 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:16:42.851443 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:16:42.861654 kubelet[2149]: E1108 00:16:42.861610 2149 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:16:42.861892 kubelet[2149]: I1108 00:16:42.861866 2149 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:16:42.861928 kubelet[2149]: I1108 00:16:42.861886 2149 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:16:42.862333 kubelet[2149]: I1108 00:16:42.862260 2149 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:16:42.863191 kubelet[2149]: E1108 00:16:42.863152 2149 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:16:42.863252 kubelet[2149]: E1108 00:16:42.863224 2149 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:16:42.963633 kubelet[2149]: I1108 00:16:42.963530 2149 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:16:42.964011 kubelet[2149]: E1108 00:16:42.963970 2149 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Nov 8 00:16:42.976940 systemd[1]: Created slice kubepods-burstable-pod657604beb77144a40573ee368e9d2ab2.slice - libcontainer container kubepods-burstable-pod657604beb77144a40573ee368e9d2ab2.slice. Nov 8 00:16:42.992252 kubelet[2149]: E1108 00:16:42.992213 2149 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:16:42.996080 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 8 00:16:42.997853 kubelet[2149]: E1108 00:16:42.997817 2149 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:16:42.999843 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
Nov 8 00:16:43.001448 kubelet[2149]: E1108 00:16:43.001413 2149 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:16:43.036684 kubelet[2149]: I1108 00:16:43.036641 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/657604beb77144a40573ee368e9d2ab2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"657604beb77144a40573ee368e9d2ab2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:43.036684 kubelet[2149]: I1108 00:16:43.036676 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:43.036790 kubelet[2149]: I1108 00:16:43.036708 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:16:43.036790 kubelet[2149]: I1108 00:16:43.036726 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/657604beb77144a40573ee368e9d2ab2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"657604beb77144a40573ee368e9d2ab2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:43.036790 kubelet[2149]: I1108 00:16:43.036744 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/657604beb77144a40573ee368e9d2ab2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"657604beb77144a40573ee368e9d2ab2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:43.036790 kubelet[2149]: I1108 00:16:43.036763 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:43.036790 kubelet[2149]: I1108 00:16:43.036778 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:43.036935 kubelet[2149]: I1108 00:16:43.036794 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:43.036935 kubelet[2149]: I1108 00:16:43.036811 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:43.165899 kubelet[2149]: I1108 00:16:43.165797 2149 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:16:43.166239 kubelet[2149]: E1108 00:16:43.166079 2149 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Nov 8 00:16:43.241056 kubelet[2149]: E1108 00:16:43.241012 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Nov 8 00:16:43.293291 kubelet[2149]: E1108 00:16:43.293263 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:43.294094 containerd[1465]: time="2025-11-08T00:16:43.294044412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:657604beb77144a40573ee368e9d2ab2,Namespace:kube-system,Attempt:0,}" Nov 8 00:16:43.298419 kubelet[2149]: E1108 00:16:43.298379 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:43.298911 containerd[1465]: time="2025-11-08T00:16:43.298886633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 8 00:16:43.302147 kubelet[2149]: E1108 00:16:43.302122 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:43.302418 containerd[1465]: time="2025-11-08T00:16:43.302381297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 8 00:16:43.568404 kubelet[2149]: I1108 00:16:43.568376 2149 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:16:43.568858 kubelet[2149]: E1108 00:16:43.568817 2149 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Nov 8 00:16:43.590407 kubelet[2149]: E1108 00:16:43.590381 2149 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:16:43.635614 kubelet[2149]: E1108 00:16:43.635565 2149 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:16:43.661511 kubelet[2149]: E1108 00:16:43.661459 2149 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:16:43.808683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount724133167.mount: Deactivated successfully. Nov 8 00:16:43.816689 containerd[1465]: time="2025-11-08T00:16:43.816622443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:16:43.817725 containerd[1465]: time="2025-11-08T00:16:43.817651823Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:16:43.818721 containerd[1465]: time="2025-11-08T00:16:43.818539829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:16:43.819766 containerd[1465]: time="2025-11-08T00:16:43.819720242Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:16:43.820589 containerd[1465]: time="2025-11-08T00:16:43.820531764Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:16:43.821482 containerd[1465]: time="2025-11-08T00:16:43.821452591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:16:43.822497 containerd[1465]: time="2025-11-08T00:16:43.822454610Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:16:43.826800 containerd[1465]: time="2025-11-08T00:16:43.826754193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:16:43.827652 containerd[1465]: time="2025-11-08T00:16:43.827607434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.17985ms" Nov 8 00:16:43.829015 containerd[1465]: time="2025-11-08T00:16:43.828942597Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 534.803898ms" Nov 8 00:16:43.830280 containerd[1465]: time="2025-11-08T00:16:43.830251863Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 531.314755ms" Nov 8 00:16:43.912217 kubelet[2149]: E1108 00:16:43.912171 2149 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:16:44.042964 kubelet[2149]: E1108 00:16:44.042897 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="1.6s" Nov 8 00:16:44.061353 containerd[1465]: time="2025-11-08T00:16:44.061137205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:16:44.061353 containerd[1465]: time="2025-11-08T00:16:44.061201706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:16:44.061353 containerd[1465]: time="2025-11-08T00:16:44.061213889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:16:44.061353 containerd[1465]: time="2025-11-08T00:16:44.061303808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:16:44.062188 containerd[1465]: time="2025-11-08T00:16:44.062115139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:16:44.063111 containerd[1465]: time="2025-11-08T00:16:44.062218313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:16:44.063111 containerd[1465]: time="2025-11-08T00:16:44.062234373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:16:44.063111 containerd[1465]: time="2025-11-08T00:16:44.062358446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:16:44.067496 containerd[1465]: time="2025-11-08T00:16:44.067416331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:16:44.067496 containerd[1465]: time="2025-11-08T00:16:44.067461916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:16:44.067496 containerd[1465]: time="2025-11-08T00:16:44.067472877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:16:44.067758 containerd[1465]: time="2025-11-08T00:16:44.067613400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:16:44.091808 systemd[1]: Started cri-containerd-6c0a7d6732210104a709575d2468fdf642de1a7efa3bd42c359368492ef85ff4.scope - libcontainer container 6c0a7d6732210104a709575d2468fdf642de1a7efa3bd42c359368492ef85ff4. Nov 8 00:16:44.093591 systemd[1]: Started cri-containerd-a87b1b430588e92e4c0ac5653a4930feae007006be2e82c4cbe959685a31d1b5.scope - libcontainer container a87b1b430588e92e4c0ac5653a4930feae007006be2e82c4cbe959685a31d1b5. Nov 8 00:16:44.097778 systemd[1]: Started cri-containerd-2991151837612277ad517ef1945e14ac087e94e01519f38bb31d4d23500b8a54.scope - libcontainer container 2991151837612277ad517ef1945e14ac087e94e01519f38bb31d4d23500b8a54. Nov 8 00:16:44.186391 containerd[1465]: time="2025-11-08T00:16:44.186343792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2991151837612277ad517ef1945e14ac087e94e01519f38bb31d4d23500b8a54\"" Nov 8 00:16:44.187311 kubelet[2149]: E1108 00:16:44.187283 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:44.192762 containerd[1465]: time="2025-11-08T00:16:44.192620674Z" level=info msg="CreateContainer within sandbox \"2991151837612277ad517ef1945e14ac087e94e01519f38bb31d4d23500b8a54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:16:44.195672 containerd[1465]: time="2025-11-08T00:16:44.195598127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:657604beb77144a40573ee368e9d2ab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a87b1b430588e92e4c0ac5653a4930feae007006be2e82c4cbe959685a31d1b5\"" Nov 8 00:16:44.197788 kubelet[2149]: E1108 00:16:44.197751 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:44.203615 containerd[1465]: time="2025-11-08T00:16:44.202108437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c0a7d6732210104a709575d2468fdf642de1a7efa3bd42c359368492ef85ff4\"" Nov 8 00:16:44.203764 kubelet[2149]: E1108 00:16:44.203740 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:44.204949 containerd[1465]: time="2025-11-08T00:16:44.204920470Z" level=info msg="CreateContainer within sandbox \"a87b1b430588e92e4c0ac5653a4930feae007006be2e82c4cbe959685a31d1b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:16:44.210303 containerd[1465]: time="2025-11-08T00:16:44.210256627Z" level=info msg="CreateContainer within sandbox \"6c0a7d6732210104a709575d2468fdf642de1a7efa3bd42c359368492ef85ff4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:16:44.227193 containerd[1465]: time="2025-11-08T00:16:44.227144919Z" level=info msg="CreateContainer within sandbox \"2991151837612277ad517ef1945e14ac087e94e01519f38bb31d4d23500b8a54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6a3717ad0b5bae65c6189996cc2630ceb54947b4f59827a84a80f2daf64f0050\"" Nov 8 
00:16:44.227810 containerd[1465]: time="2025-11-08T00:16:44.227778427Z" level=info msg="StartContainer for \"6a3717ad0b5bae65c6189996cc2630ceb54947b4f59827a84a80f2daf64f0050\"" Nov 8 00:16:44.231882 containerd[1465]: time="2025-11-08T00:16:44.231839232Z" level=info msg="CreateContainer within sandbox \"a87b1b430588e92e4c0ac5653a4930feae007006be2e82c4cbe959685a31d1b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5b174c1a647a5b6794d2ee2c4754da80d8626cf1c140a7ae93f5a6543d17b0ce\"" Nov 8 00:16:44.232271 containerd[1465]: time="2025-11-08T00:16:44.232235115Z" level=info msg="StartContainer for \"5b174c1a647a5b6794d2ee2c4754da80d8626cf1c140a7ae93f5a6543d17b0ce\"" Nov 8 00:16:44.235800 containerd[1465]: time="2025-11-08T00:16:44.235749967Z" level=info msg="CreateContainer within sandbox \"6c0a7d6732210104a709575d2468fdf642de1a7efa3bd42c359368492ef85ff4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"53e2321d61389cd3754d6eb1c3a709a7ce2b90922fcd4fa0436d0e7a959eb3f3\"" Nov 8 00:16:44.237083 containerd[1465]: time="2025-11-08T00:16:44.237028073Z" level=info msg="StartContainer for \"53e2321d61389cd3754d6eb1c3a709a7ce2b90922fcd4fa0436d0e7a959eb3f3\"" Nov 8 00:16:44.263770 systemd[1]: Started cri-containerd-6a3717ad0b5bae65c6189996cc2630ceb54947b4f59827a84a80f2daf64f0050.scope - libcontainer container 6a3717ad0b5bae65c6189996cc2630ceb54947b4f59827a84a80f2daf64f0050. Nov 8 00:16:44.277311 systemd[1]: Started cri-containerd-5b174c1a647a5b6794d2ee2c4754da80d8626cf1c140a7ae93f5a6543d17b0ce.scope - libcontainer container 5b174c1a647a5b6794d2ee2c4754da80d8626cf1c140a7ae93f5a6543d17b0ce. Nov 8 00:16:44.292755 systemd[1]: Started cri-containerd-53e2321d61389cd3754d6eb1c3a709a7ce2b90922fcd4fa0436d0e7a959eb3f3.scope - libcontainer container 53e2321d61389cd3754d6eb1c3a709a7ce2b90922fcd4fa0436d0e7a959eb3f3. 
Nov 8 00:16:44.331468 containerd[1465]: time="2025-11-08T00:16:44.331340523Z" level=info msg="StartContainer for \"5b174c1a647a5b6794d2ee2c4754da80d8626cf1c140a7ae93f5a6543d17b0ce\" returns successfully" Nov 8 00:16:44.342957 containerd[1465]: time="2025-11-08T00:16:44.342836362Z" level=info msg="StartContainer for \"6a3717ad0b5bae65c6189996cc2630ceb54947b4f59827a84a80f2daf64f0050\" returns successfully" Nov 8 00:16:44.353298 containerd[1465]: time="2025-11-08T00:16:44.353230244Z" level=info msg="StartContainer for \"53e2321d61389cd3754d6eb1c3a709a7ce2b90922fcd4fa0436d0e7a959eb3f3\" returns successfully" Nov 8 00:16:44.371123 kubelet[2149]: I1108 00:16:44.371072 2149 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:16:44.371969 kubelet[2149]: E1108 00:16:44.371524 2149 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Nov 8 00:16:44.671602 kubelet[2149]: E1108 00:16:44.671280 2149 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:16:44.671602 kubelet[2149]: E1108 00:16:44.671418 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:44.677023 kubelet[2149]: E1108 00:16:44.676817 2149 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:16:44.677023 kubelet[2149]: E1108 00:16:44.676918 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:44.678781 kubelet[2149]: E1108 00:16:44.678581 2149 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:16:44.678975 kubelet[2149]: E1108 00:16:44.678930 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:45.681335 kubelet[2149]: E1108 00:16:45.681280 2149 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:16:45.681952 kubelet[2149]: E1108 00:16:45.681426 2149 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:16:45.681952 kubelet[2149]: E1108 00:16:45.681438 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:45.681952 kubelet[2149]: E1108 00:16:45.681600 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:45.973361 kubelet[2149]: I1108 00:16:45.973300 2149 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:16:46.236262 kubelet[2149]: E1108 00:16:46.236112 2149 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"localhost\" not found" node="localhost" Nov 8 00:16:46.300510 kubelet[2149]: I1108 00:16:46.300457 2149 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:16:46.337303 kubelet[2149]: I1108 00:16:46.337227 2149 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:16:46.388231 kubelet[2149]: E1108 00:16:46.388173 2149 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:16:46.388231 kubelet[2149]: I1108 00:16:46.388206 2149 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:46.390157 kubelet[2149]: E1108 00:16:46.390135 2149 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:46.390157 kubelet[2149]: I1108 00:16:46.390154 2149 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:46.392122 kubelet[2149]: E1108 00:16:46.392028 2149 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:46.624665 kubelet[2149]: I1108 00:16:46.624502 2149 apiserver.go:52] "Watching apiserver" Nov 8 00:16:46.634982 kubelet[2149]: I1108 00:16:46.634946 2149 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:16:48.267561 kubelet[2149]: I1108 00:16:48.267508 2149 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:48.275994 kubelet[2149]: E1108 00:16:48.275956 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:48.569253 systemd[1]: Reloading requested from client PID 2438 ('systemctl') (unit session-7.scope)... Nov 8 00:16:48.569268 systemd[1]: Reloading... Nov 8 00:16:48.680617 zram_generator::config[2477]: No configuration found. Nov 8 00:16:48.686478 kubelet[2149]: E1108 00:16:48.686446 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:48.871257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:16:48.967100 systemd[1]: Reloading finished in 397 ms. Nov 8 00:16:49.014136 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:49.032068 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:16:49.032362 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:16:49.032418 systemd[1]: kubelet.service: Consumed 1.260s CPU time, 135.2M memory peak, 0B memory swap peak. Nov 8 00:16:49.037974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:16:49.219770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:16:49.224713 (kubelet)[2522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:16:49.292590 kubelet[2522]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:16:49.292590 kubelet[2522]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:16:49.292590 kubelet[2522]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:16:49.293050 kubelet[2522]: I1108 00:16:49.292682 2522 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:16:49.301849 kubelet[2522]: I1108 00:16:49.301773 2522 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:16:49.301849 kubelet[2522]: I1108 00:16:49.301829 2522 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:16:49.302112 kubelet[2522]: I1108 00:16:49.302085 2522 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:16:49.303330 kubelet[2522]: I1108 00:16:49.303299 2522 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:16:49.306529 kubelet[2522]: I1108 00:16:49.306484 2522 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:16:49.309910 kubelet[2522]: E1108 00:16:49.309857 2522 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:16:49.309910 kubelet[2522]: I1108 00:16:49.309908 2522 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:16:49.316776 kubelet[2522]: I1108 00:16:49.316743 2522 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:16:49.317085 kubelet[2522]: I1108 00:16:49.317052 2522 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:16:49.317242 kubelet[2522]: I1108 00:16:49.317080 2522 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:16:49.317338 kubelet[2522]: I1108 00:16:49.317246 2522 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:16:49.317338 kubelet[2522]: I1108 00:16:49.317258 2522 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:16:49.317338 kubelet[2522]: I1108 00:16:49.317329 2522 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:16:49.317549 kubelet[2522]: I1108 00:16:49.317530 2522 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:16:49.317549 kubelet[2522]: I1108 00:16:49.317547 2522 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:16:49.317621 kubelet[2522]: I1108 00:16:49.317592 2522 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:16:49.317621 kubelet[2522]: I1108 00:16:49.317616 2522 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:16:49.322176 kubelet[2522]: I1108 00:16:49.322130 2522 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:16:49.323641 kubelet[2522]: I1108 00:16:49.323015 2522 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:16:49.329638 kubelet[2522]: I1108 00:16:49.329422 2522 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:16:49.329638 kubelet[2522]: I1108 00:16:49.329507 2522 server.go:1289] "Started kubelet" Nov 8 00:16:49.329828 kubelet[2522]: I1108 00:16:49.329681 2522 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:16:49.331396 
kubelet[2522]: I1108 00:16:49.330530 2522 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:16:49.334960 kubelet[2522]: I1108 00:16:49.334901 2522 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:16:49.339644 kubelet[2522]: I1108 00:16:49.338444 2522 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:16:49.339644 kubelet[2522]: I1108 00:16:49.338706 2522 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:16:49.339644 kubelet[2522]: I1108 00:16:49.339114 2522 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:16:49.340369 kubelet[2522]: I1108 00:16:49.340007 2522 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:16:49.340369 kubelet[2522]: I1108 00:16:49.340127 2522 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:16:49.340369 kubelet[2522]: I1108 00:16:49.340307 2522 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:16:49.341324 kubelet[2522]: I1108 00:16:49.341293 2522 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:16:49.341441 kubelet[2522]: I1108 00:16:49.341408 2522 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:16:49.342461 kubelet[2522]: E1108 00:16:49.342421 2522 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:16:49.343501 kubelet[2522]: I1108 00:16:49.343475 2522 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:16:49.360908 kubelet[2522]: I1108 00:16:49.360709 2522 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:16:49.362118 kubelet[2522]: I1108 00:16:49.362102 2522 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:16:49.362378 kubelet[2522]: I1108 00:16:49.362364 2522 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:16:49.362562 kubelet[2522]: I1108 00:16:49.362550 2522 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:16:49.363002 kubelet[2522]: I1108 00:16:49.362989 2522 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:16:49.363133 kubelet[2522]: E1108 00:16:49.363093 2522 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:16:49.389188 kubelet[2522]: I1108 00:16:49.389153 2522 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:16:49.389188 kubelet[2522]: I1108 00:16:49.389172 2522 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:16:49.389188 kubelet[2522]: I1108 00:16:49.389191 2522 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:16:49.389390 kubelet[2522]: I1108 00:16:49.389316 2522 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:16:49.389390 kubelet[2522]: I1108 00:16:49.389330 2522 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:16:49.389390 kubelet[2522]: I1108 00:16:49.389345 2522 policy_none.go:49] "None policy: Start" Nov 8 00:16:49.389390 kubelet[2522]: I1108 00:16:49.389354 2522 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:16:49.389390 kubelet[2522]: I1108 00:16:49.389364 2522 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:16:49.389505 kubelet[2522]: I1108 00:16:49.389448 2522 state_mem.go:75] "Updated machine memory state" Nov 8 00:16:49.393266 kubelet[2522]: E1108 00:16:49.393242 2522 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:16:49.393469 kubelet[2522]: I1108 00:16:49.393424 2522 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:16:49.393469 kubelet[2522]: I1108 00:16:49.393441 2522 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:16:49.393701 kubelet[2522]: I1108 00:16:49.393679 2522 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:16:49.397608 kubelet[2522]: E1108 00:16:49.396111 2522 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:16:49.464924 kubelet[2522]: I1108 00:16:49.464865 2522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:49.465122 kubelet[2522]: I1108 00:16:49.464979 2522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:16:49.465122 kubelet[2522]: I1108 00:16:49.465012 2522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:49.500502 kubelet[2522]: I1108 00:16:49.500388 2522 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:16:49.541299 kubelet[2522]: I1108 00:16:49.541239 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:49.541299 kubelet[2522]: I1108 00:16:49.541280 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:49.541299 kubelet[2522]: I1108 00:16:49.541308 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:49.541519 kubelet[2522]: I1108 00:16:49.541361 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/657604beb77144a40573ee368e9d2ab2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"657604beb77144a40573ee368e9d2ab2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:49.541519 kubelet[2522]: I1108 00:16:49.541393 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/657604beb77144a40573ee368e9d2ab2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"657604beb77144a40573ee368e9d2ab2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:49.541519 kubelet[2522]: I1108 00:16:49.541426 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/657604beb77144a40573ee368e9d2ab2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"657604beb77144a40573ee368e9d2ab2\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:49.541519 kubelet[2522]: I1108 00:16:49.541445 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:49.541519 kubelet[2522]: I1108 00:16:49.541461 2522 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:16:49.541664 kubelet[2522]: I1108 00:16:49.541477 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:16:49.621326 kubelet[2522]: E1108 00:16:49.621229 2522 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:49.626328 kubelet[2522]: I1108 00:16:49.626269 2522 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:16:49.626487 kubelet[2522]: I1108 00:16:49.626380 2522 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:16:49.908621 kubelet[2522]: E1108 00:16:49.908376 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:49.908621 kubelet[2522]: E1108 00:16:49.908439 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:49.921718 kubelet[2522]: E1108 00:16:49.921679 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:50.320118 kubelet[2522]: I1108 00:16:50.320066 2522 apiserver.go:52] "Watching apiserver" Nov 8 00:16:50.340816 kubelet[2522]: I1108 00:16:50.340750 2522 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:16:50.376904 kubelet[2522]: I1108 00:16:50.376857 2522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:50.377387 kubelet[2522]: I1108 00:16:50.377334 2522 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:16:50.377702 kubelet[2522]: E1108 00:16:50.377681 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:50.384028 kubelet[2522]: E1108 00:16:50.383978 2522 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:16:50.384213 kubelet[2522]: E1108 00:16:50.384185 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:50.384909 kubelet[2522]: E1108 00:16:50.384885 2522 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:16:50.385071 kubelet[2522]: E1108 00:16:50.385046 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:50.402110 kubelet[2522]: I1108 00:16:50.402015 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.401982238 podStartE2EDuration="2.401982238s" podCreationTimestamp="2025-11-08 00:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:16:50.394131254 +0000 UTC m=+1.164822076" watchObservedRunningTime="2025-11-08 00:16:50.401982238 +0000 UTC m=+1.172673060" Nov 8 00:16:50.402459 kubelet[2522]: I1108 00:16:50.402194 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.402188646 podStartE2EDuration="1.402188646s" podCreationTimestamp="2025-11-08 00:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:16:50.401881128 +0000 UTC m=+1.172571970" watchObservedRunningTime="2025-11-08 00:16:50.402188646 +0000 UTC m=+1.172879468" Nov 8 00:16:50.410267 kubelet[2522]: I1108 00:16:50.410188 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.410170074 podStartE2EDuration="1.410170074s" podCreationTimestamp="2025-11-08 00:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:16:50.409895257 +0000 UTC m=+1.180586090" watchObservedRunningTime="2025-11-08 00:16:50.410170074 +0000 UTC m=+1.180860906" Nov 8 00:16:51.378083 kubelet[2522]: E1108 00:16:51.378043 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:51.378620 kubelet[2522]: E1108 00:16:51.378189 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:52.449902 kubelet[2522]: E1108 00:16:52.449845 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:54.343672 kubelet[2522]: I1108 00:16:54.343626 2522 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:16:54.344164 containerd[1465]: time="2025-11-08T00:16:54.344003987Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:16:54.344419 kubelet[2522]: I1108 00:16:54.344191 2522 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:16:54.914847 kubelet[2522]: E1108 00:16:54.914803 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:55.149559 systemd[1]: Created slice kubepods-besteffort-podfb79ed1a_6b94_49dc_9b22_86ec0a7eec72.slice - libcontainer container kubepods-besteffort-podfb79ed1a_6b94_49dc_9b22_86ec0a7eec72.slice. 
Nov 8 00:16:55.173032 kubelet[2522]: I1108 00:16:55.172855 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb79ed1a-6b94-49dc-9b22-86ec0a7eec72-xtables-lock\") pod \"kube-proxy-ckqfd\" (UID: \"fb79ed1a-6b94-49dc-9b22-86ec0a7eec72\") " pod="kube-system/kube-proxy-ckqfd" Nov 8 00:16:55.173032 kubelet[2522]: I1108 00:16:55.172911 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb79ed1a-6b94-49dc-9b22-86ec0a7eec72-lib-modules\") pod \"kube-proxy-ckqfd\" (UID: \"fb79ed1a-6b94-49dc-9b22-86ec0a7eec72\") " pod="kube-system/kube-proxy-ckqfd" Nov 8 00:16:55.173032 kubelet[2522]: I1108 00:16:55.172936 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb79ed1a-6b94-49dc-9b22-86ec0a7eec72-kube-proxy\") pod \"kube-proxy-ckqfd\" (UID: \"fb79ed1a-6b94-49dc-9b22-86ec0a7eec72\") " pod="kube-system/kube-proxy-ckqfd" Nov 8 00:16:55.173032 kubelet[2522]: I1108 00:16:55.172954 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzcr\" (UniqueName: \"kubernetes.io/projected/fb79ed1a-6b94-49dc-9b22-86ec0a7eec72-kube-api-access-hmzcr\") pod \"kube-proxy-ckqfd\" (UID: \"fb79ed1a-6b94-49dc-9b22-86ec0a7eec72\") " pod="kube-system/kube-proxy-ckqfd" Nov 8 00:16:55.278379 kubelet[2522]: E1108 00:16:55.278309 2522 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 8 00:16:55.278379 kubelet[2522]: E1108 00:16:55.278357 2522 projected.go:194] Error preparing data for projected volume kube-api-access-hmzcr for pod kube-system/kube-proxy-ckqfd: configmap "kube-root-ca.crt" not found Nov 8 00:16:55.278600 kubelet[2522]: E1108 00:16:55.278456 2522 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fb79ed1a-6b94-49dc-9b22-86ec0a7eec72-kube-api-access-hmzcr podName:fb79ed1a-6b94-49dc-9b22-86ec0a7eec72 nodeName:}" failed. No retries permitted until 2025-11-08 00:16:55.778430398 +0000 UTC m=+6.549121220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hmzcr" (UniqueName: "kubernetes.io/projected/fb79ed1a-6b94-49dc-9b22-86ec0a7eec72-kube-api-access-hmzcr") pod "kube-proxy-ckqfd" (UID: "fb79ed1a-6b94-49dc-9b22-86ec0a7eec72") : configmap "kube-root-ca.crt" not found Nov 8 00:16:55.384058 kubelet[2522]: E1108 00:16:55.383999 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:55.612840 kubelet[2522]: E1108 00:16:55.612247 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:55.623211 systemd[1]: Created slice kubepods-besteffort-pod13b014e9_f972_42a5_85bc_ce8f75ee5e7f.slice - libcontainer container kubepods-besteffort-pod13b014e9_f972_42a5_85bc_ce8f75ee5e7f.slice. 
Nov 8 00:16:55.677001 kubelet[2522]: I1108 00:16:55.676931 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtgwp\" (UniqueName: \"kubernetes.io/projected/13b014e9-f972-42a5-85bc-ce8f75ee5e7f-kube-api-access-rtgwp\") pod \"tigera-operator-7dcd859c48-9hhdv\" (UID: \"13b014e9-f972-42a5-85bc-ce8f75ee5e7f\") " pod="tigera-operator/tigera-operator-7dcd859c48-9hhdv" Nov 8 00:16:55.677001 kubelet[2522]: I1108 00:16:55.676982 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13b014e9-f972-42a5-85bc-ce8f75ee5e7f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-9hhdv\" (UID: \"13b014e9-f972-42a5-85bc-ce8f75ee5e7f\") " pod="tigera-operator/tigera-operator-7dcd859c48-9hhdv" Nov 8 00:16:55.930363 containerd[1465]: time="2025-11-08T00:16:55.930210649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9hhdv,Uid:13b014e9-f972-42a5-85bc-ce8f75ee5e7f,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:16:55.963947 containerd[1465]: time="2025-11-08T00:16:55.963790805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:16:55.963947 containerd[1465]: time="2025-11-08T00:16:55.963874162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:16:55.963947 containerd[1465]: time="2025-11-08T00:16:55.963896594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:16:55.964225 containerd[1465]: time="2025-11-08T00:16:55.964013924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:16:55.994756 systemd[1]: Started cri-containerd-6d28316397d24a279682fd7bfdbd2bc6046f93d5e64bc11fd75f43c240f90c9b.scope - libcontainer container 6d28316397d24a279682fd7bfdbd2bc6046f93d5e64bc11fd75f43c240f90c9b. Nov 8 00:16:56.037214 containerd[1465]: time="2025-11-08T00:16:56.037149890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9hhdv,Uid:13b014e9-f972-42a5-85bc-ce8f75ee5e7f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6d28316397d24a279682fd7bfdbd2bc6046f93d5e64bc11fd75f43c240f90c9b\"" Nov 8 00:16:56.039489 containerd[1465]: time="2025-11-08T00:16:56.039221248Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:16:56.058782 kubelet[2522]: E1108 00:16:56.058737 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:56.059226 containerd[1465]: time="2025-11-08T00:16:56.059190180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ckqfd,Uid:fb79ed1a-6b94-49dc-9b22-86ec0a7eec72,Namespace:kube-system,Attempt:0,}" Nov 8 00:16:56.085789 containerd[1465]: time="2025-11-08T00:16:56.085663974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:16:56.085789 containerd[1465]: time="2025-11-08T00:16:56.085736500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:16:56.085789 containerd[1465]: time="2025-11-08T00:16:56.085748763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:16:56.086505 containerd[1465]: time="2025-11-08T00:16:56.086433879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:16:56.106763 systemd[1]: Started cri-containerd-13935a34dcf7643650bf777c89fac0e2a2a32ecf06b659c45e3a1ba10094c73f.scope - libcontainer container 13935a34dcf7643650bf777c89fac0e2a2a32ecf06b659c45e3a1ba10094c73f. Nov 8 00:16:56.133684 containerd[1465]: time="2025-11-08T00:16:56.133618418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ckqfd,Uid:fb79ed1a-6b94-49dc-9b22-86ec0a7eec72,Namespace:kube-system,Attempt:0,} returns sandbox id \"13935a34dcf7643650bf777c89fac0e2a2a32ecf06b659c45e3a1ba10094c73f\"" Nov 8 00:16:56.134539 kubelet[2522]: E1108 00:16:56.134489 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:56.139732 containerd[1465]: time="2025-11-08T00:16:56.139685226Z" level=info msg="CreateContainer within sandbox \"13935a34dcf7643650bf777c89fac0e2a2a32ecf06b659c45e3a1ba10094c73f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:16:56.160914 containerd[1465]: time="2025-11-08T00:16:56.160834815Z" level=info msg="CreateContainer within sandbox \"13935a34dcf7643650bf777c89fac0e2a2a32ecf06b659c45e3a1ba10094c73f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c518bd6ee181ba79befbc53db0f5a27910c3e7e59a0030ccb6726c0b63133bb\"" Nov 8 00:16:56.161456 containerd[1465]: time="2025-11-08T00:16:56.161424031Z" level=info msg="StartContainer for \"9c518bd6ee181ba79befbc53db0f5a27910c3e7e59a0030ccb6726c0b63133bb\"" Nov 8 00:16:56.196827 systemd[1]: Started cri-containerd-9c518bd6ee181ba79befbc53db0f5a27910c3e7e59a0030ccb6726c0b63133bb.scope - libcontainer container 9c518bd6ee181ba79befbc53db0f5a27910c3e7e59a0030ccb6726c0b63133bb. 
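
The recurring dns.go:153 warning reports that the node's resolv.conf lists more nameservers than the resolver supports, so the kubelet trims the list and applies only "1.1.1.1 1.0.0.1 8.8.8.8". A small Go sketch of that trimming, assuming the classic three-nameserver resolv.conf limit; the parsing here is deliberately simplified and is not the kubelet's actual code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the traditional resolv.conf limit of three
    // nameservers, which matches the three servers the warning above applies.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n",
                strings.Join(servers[:maxNameservers], " "))
            return
        }
        fmt.Printf("nameservers: %s\n", strings.Join(servers, " "))
    }

The warning repeats on every pod sync because the host's resolv.conf is unchanged, not because anything new is failing.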
Nov 8 00:16:56.234791 containerd[1465]: time="2025-11-08T00:16:56.234732827Z" level=info msg="StartContainer for \"9c518bd6ee181ba79befbc53db0f5a27910c3e7e59a0030ccb6726c0b63133bb\" returns successfully" Nov 8 00:16:56.390733 kubelet[2522]: E1108 00:16:56.390463 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:56.396820 kubelet[2522]: E1108 00:16:56.396782 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:16:56.421251 kubelet[2522]: I1108 00:16:56.421165 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ckqfd" podStartSLOduration=1.421143172 podStartE2EDuration="1.421143172s" podCreationTimestamp="2025-11-08 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:16:56.410213015 +0000 UTC m=+7.180903837" watchObservedRunningTime="2025-11-08 00:16:56.421143172 +0000 UTC m=+7.191833994" Nov 8 00:16:57.714339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1956328686.mount: Deactivated successfully. Nov 8 00:16:58.040935 containerd[1465]: time="2025-11-08T00:16:58.040793574Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:58.041628 containerd[1465]: time="2025-11-08T00:16:58.041548471Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:16:58.042808 containerd[1465]: time="2025-11-08T00:16:58.042767889Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:58.045051 containerd[1465]: time="2025-11-08T00:16:58.045013053Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:16:58.045658 containerd[1465]: time="2025-11-08T00:16:58.045623268Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.006355223s" Nov 8 00:16:58.045691 containerd[1465]: time="2025-11-08T00:16:58.045659296Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:16:58.050111 containerd[1465]: time="2025-11-08T00:16:58.050074222Z" level=info msg="CreateContainer within sandbox \"6d28316397d24a279682fd7bfdbd2bc6046f93d5e64bc11fd75f43c240f90c9b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:16:58.065103 containerd[1465]: time="2025-11-08T00:16:58.065056642Z" level=info msg="CreateContainer within sandbox \"6d28316397d24a279682fd7bfdbd2bc6046f93d5e64bc11fd75f43c240f90c9b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab9e7962dc5565e7bb8e377c6af7c1f45543896ca15086458d71102e0c82ef94\"" 
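
The containerd entries above give both the bytes read for the quay.io/tigera/operator:v1.38.7 pull (25061691) and the elapsed time (2.006355223s), which pins the effective pull rate at roughly 12.5 MB/s. A trivial Go check of that arithmetic, with both figures copied from the log:

    package main

    import "fmt"

    func main() {
        const bytesRead = 25061691  // "bytes read=25061691"
        const seconds = 2.006355223 // "in 2.006355223s"
        fmt.Printf("effective pull rate: %.1f MB/s\n", bytesRead/seconds/1e6)
    }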
Nov 8 00:16:58.065614 containerd[1465]: time="2025-11-08T00:16:58.065583491Z" level=info msg="StartContainer for \"ab9e7962dc5565e7bb8e377c6af7c1f45543896ca15086458d71102e0c82ef94\"" Nov 8 00:16:58.099700 systemd[1]: Started cri-containerd-ab9e7962dc5565e7bb8e377c6af7c1f45543896ca15086458d71102e0c82ef94.scope - libcontainer container ab9e7962dc5565e7bb8e377c6af7c1f45543896ca15086458d71102e0c82ef94. Nov 8 00:16:58.127531 containerd[1465]: time="2025-11-08T00:16:58.127481424Z" level=info msg="StartContainer for \"ab9e7962dc5565e7bb8e377c6af7c1f45543896ca15086458d71102e0c82ef94\" returns successfully" Nov 8 00:17:01.303899 update_engine[1455]: I20251108 00:17:01.303767 1455 update_attempter.cc:509] Updating boot flags... Nov 8 00:17:01.372254 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2916) Nov 8 00:17:01.451624 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2916) Nov 8 00:17:01.509693 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2916) Nov 8 00:17:02.454835 kubelet[2522]: E1108 00:17:02.454725 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:02.462999 kubelet[2522]: I1108 00:17:02.462929 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-9hhdv" podStartSLOduration=5.455529551 podStartE2EDuration="7.462912843s" podCreationTimestamp="2025-11-08 00:16:55 +0000 UTC" firstStartedPulling="2025-11-08 00:16:56.038900216 +0000 UTC m=+6.809591038" lastFinishedPulling="2025-11-08 00:16:58.046283507 +0000 UTC m=+8.816974330" observedRunningTime="2025-11-08 00:16:58.404139924 +0000 UTC m=+9.174830746" watchObservedRunningTime="2025-11-08 00:17:02.462912843 +0000 UTC m=+13.233603665" Nov 8 00:17:03.405678 kubelet[2522]: E1108 00:17:03.405617 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:03.570834 sudo[1645]: pam_unix(sudo:session): session closed for user root Nov 8 00:17:03.790269 sshd[1642]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:03.794392 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:17:03.797026 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:60732.service: Deactivated successfully. Nov 8 00:17:03.799698 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:17:03.799925 systemd[1]: session-7.scope: Consumed 7.236s CPU time, 162.2M memory peak, 0B memory swap peak. Nov 8 00:17:03.801078 systemd-logind[1450]: Removed session 7. Nov 8 00:17:07.880031 systemd[1]: Created slice kubepods-besteffort-pod660a2fa1_0de6_4763_99ed_d30ee7759e24.slice - libcontainer container kubepods-besteffort-pod660a2fa1_0de6_4763_99ed_d30ee7759e24.slice. Nov 8 00:17:07.948947 systemd[1]: Created slice kubepods-besteffort-pod91b349b8_bf42_4fa1_94a9_e20415a05e44.slice - libcontainer container kubepods-besteffort-pod91b349b8_bf42_4fa1_94a9_e20415a05e44.slice. 
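
The pod_startup_latency_tracker entry for tigera-operator below (at 00:17:02) distinguishes podStartE2EDuration (7.462912843s, creation to watch-observed running) from podStartSLOduration (5.455529551s). The gap is exactly the image-pull window, lastFinishedPulling minus firstStartedPulling, so the SLO figure appears to exclude pull time; the numbers bear that out to within a nanosecond of rounding. A small Go check of that arithmetic using the timestamps from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-11-08 00:16:55 +0000 UTC")              // podCreationTimestamp
        firstPull := parse("2025-11-08 00:16:56.038900216 +0000 UTC")  // firstStartedPulling
        lastPull := parse("2025-11-08 00:16:58.046283507 +0000 UTC")   // lastFinishedPulling
        watched := parse("2025-11-08 00:17:02.462912843 +0000 UTC")    // watchObservedRunningTime

        e2e := watched.Sub(created)    // 7.462912843s == podStartE2EDuration
        pull := lastPull.Sub(firstPull) // 2.007383291s pull window
        fmt.Println(e2e, pull, e2e-pull) // e2e-pull ≈ podStartSLOduration
    }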
Nov 8 00:17:08.059456 kubelet[2522]: I1108 00:17:08.059401 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/660a2fa1-0de6-4763-99ed-d30ee7759e24-typha-certs\") pod \"calico-typha-7f7fb49644-lv8k7\" (UID: \"660a2fa1-0de6-4763-99ed-d30ee7759e24\") " pod="calico-system/calico-typha-7f7fb49644-lv8k7" Nov 8 00:17:08.060028 kubelet[2522]: I1108 00:17:08.059458 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/91b349b8-bf42-4fa1-94a9-e20415a05e44-cni-net-dir\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060028 kubelet[2522]: I1108 00:17:08.059518 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/91b349b8-bf42-4fa1-94a9-e20415a05e44-flexvol-driver-host\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060028 kubelet[2522]: I1108 00:17:08.059539 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91b349b8-bf42-4fa1-94a9-e20415a05e44-lib-modules\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060028 kubelet[2522]: I1108 00:17:08.059557 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91b349b8-bf42-4fa1-94a9-e20415a05e44-xtables-lock\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060028 kubelet[2522]: I1108 00:17:08.059606 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660a2fa1-0de6-4763-99ed-d30ee7759e24-tigera-ca-bundle\") pod \"calico-typha-7f7fb49644-lv8k7\" (UID: \"660a2fa1-0de6-4763-99ed-d30ee7759e24\") " pod="calico-system/calico-typha-7f7fb49644-lv8k7" Nov 8 00:17:08.060192 kubelet[2522]: I1108 00:17:08.059626 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/91b349b8-bf42-4fa1-94a9-e20415a05e44-cni-log-dir\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060192 kubelet[2522]: I1108 00:17:08.059664 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg8jp\" (UniqueName: \"kubernetes.io/projected/91b349b8-bf42-4fa1-94a9-e20415a05e44-kube-api-access-pg8jp\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060192 kubelet[2522]: I1108 00:17:08.059698 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/91b349b8-bf42-4fa1-94a9-e20415a05e44-node-certs\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060192 kubelet[2522]: I1108 
00:17:08.059722 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91b349b8-bf42-4fa1-94a9-e20415a05e44-tigera-ca-bundle\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060192 kubelet[2522]: I1108 00:17:08.059747 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w96c9\" (UniqueName: \"kubernetes.io/projected/660a2fa1-0de6-4763-99ed-d30ee7759e24-kube-api-access-w96c9\") pod \"calico-typha-7f7fb49644-lv8k7\" (UID: \"660a2fa1-0de6-4763-99ed-d30ee7759e24\") " pod="calico-system/calico-typha-7f7fb49644-lv8k7" Nov 8 00:17:08.060329 kubelet[2522]: I1108 00:17:08.059767 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/91b349b8-bf42-4fa1-94a9-e20415a05e44-cni-bin-dir\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060329 kubelet[2522]: I1108 00:17:08.059787 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/91b349b8-bf42-4fa1-94a9-e20415a05e44-var-run-calico\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060329 kubelet[2522]: I1108 00:17:08.059808 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/91b349b8-bf42-4fa1-94a9-e20415a05e44-policysync\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.060329 kubelet[2522]: I1108 00:17:08.059830 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/91b349b8-bf42-4fa1-94a9-e20415a05e44-var-lib-calico\") pod \"calico-node-lpljc\" (UID: \"91b349b8-bf42-4fa1-94a9-e20415a05e44\") " pod="calico-system/calico-node-lpljc" Nov 8 00:17:08.095683 kubelet[2522]: E1108 00:17:08.095628 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:08.169914 kubelet[2522]: E1108 00:17:08.169724 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.169914 kubelet[2522]: W1108 00:17:08.169750 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.173345 kubelet[2522]: E1108 00:17:08.172488 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:17:08.173553 kubelet[2522]: E1108 00:17:08.173491 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.173618 kubelet[2522]: W1108 00:17:08.173555 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.173747 kubelet[2522]: E1108 00:17:08.173619 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.178717 kubelet[2522]: E1108 00:17:08.178443 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.178717 kubelet[2522]: W1108 00:17:08.178463 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.178717 kubelet[2522]: E1108 00:17:08.178478 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.181176 kubelet[2522]: E1108 00:17:08.181131 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.181176 kubelet[2522]: W1108 00:17:08.181150 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.181176 kubelet[2522]: E1108 00:17:08.181162 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.184251 kubelet[2522]: E1108 00:17:08.184221 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:08.186097 containerd[1465]: time="2025-11-08T00:17:08.186028100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7fb49644-lv8k7,Uid:660a2fa1-0de6-4763-99ed-d30ee7759e24,Namespace:calico-system,Attempt:0,}" Nov 8 00:17:08.235925 containerd[1465]: time="2025-11-08T00:17:08.235219010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:08.235925 containerd[1465]: time="2025-11-08T00:17:08.235343664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:08.235925 containerd[1465]: time="2025-11-08T00:17:08.235377437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:08.235925 containerd[1465]: time="2025-11-08T00:17:08.235556242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:08.253457 kubelet[2522]: E1108 00:17:08.253418 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:08.254846 containerd[1465]: time="2025-11-08T00:17:08.254407370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpljc,Uid:91b349b8-bf42-4fa1-94a9-e20415a05e44,Namespace:calico-system,Attempt:0,}" Nov 8 00:17:08.261178 kubelet[2522]: E1108 00:17:08.261139 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.261178 kubelet[2522]: W1108 00:17:08.261169 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.261313 kubelet[2522]: E1108 00:17:08.261195 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.261313 kubelet[2522]: I1108 00:17:08.261235 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2vg8\" (UniqueName: \"kubernetes.io/projected/485c28c7-3ce9-4d8e-9396-e75393354e2f-kube-api-access-g2vg8\") pod \"csi-node-driver-vdtmc\" (UID: \"485c28c7-3ce9-4d8e-9396-e75393354e2f\") " pod="calico-system/csi-node-driver-vdtmc" Nov 8 00:17:08.261510 kubelet[2522]: E1108 00:17:08.261496 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.261510 kubelet[2522]: W1108 00:17:08.261508 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.261586 kubelet[2522]: E1108 00:17:08.261518 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.261586 kubelet[2522]: I1108 00:17:08.261539 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/485c28c7-3ce9-4d8e-9396-e75393354e2f-registration-dir\") pod \"csi-node-driver-vdtmc\" (UID: \"485c28c7-3ce9-4d8e-9396-e75393354e2f\") " pod="calico-system/csi-node-driver-vdtmc" Nov 8 00:17:08.262007 kubelet[2522]: E1108 00:17:08.261978 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.262082 kubelet[2522]: W1108 00:17:08.262034 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.262082 kubelet[2522]: E1108 00:17:08.262075 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:17:08.262427 kubelet[2522]: E1108 00:17:08.262408 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.262427 kubelet[2522]: W1108 00:17:08.262422 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.262516 kubelet[2522]: E1108 00:17:08.262433 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.262883 kubelet[2522]: E1108 00:17:08.262701 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.262883 kubelet[2522]: W1108 00:17:08.262714 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.262883 kubelet[2522]: E1108 00:17:08.262724 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.262883 kubelet[2522]: I1108 00:17:08.262755 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/485c28c7-3ce9-4d8e-9396-e75393354e2f-socket-dir\") pod \"csi-node-driver-vdtmc\" (UID: \"485c28c7-3ce9-4d8e-9396-e75393354e2f\") " pod="calico-system/csi-node-driver-vdtmc" Nov 8 00:17:08.263516 kubelet[2522]: E1108 00:17:08.263467 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.263717 kubelet[2522]: W1108 00:17:08.263592 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.263717 kubelet[2522]: E1108 00:17:08.263610 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.263856 kubelet[2522]: E1108 00:17:08.263843 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.263917 kubelet[2522]: W1108 00:17:08.263905 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.263978 kubelet[2522]: E1108 00:17:08.263966 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:17:08.264347 kubelet[2522]: E1108 00:17:08.264333 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.264418 kubelet[2522]: W1108 00:17:08.264406 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.264484 kubelet[2522]: E1108 00:17:08.264469 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.264559 kubelet[2522]: I1108 00:17:08.264543 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/485c28c7-3ce9-4d8e-9396-e75393354e2f-varrun\") pod \"csi-node-driver-vdtmc\" (UID: \"485c28c7-3ce9-4d8e-9396-e75393354e2f\") " pod="calico-system/csi-node-driver-vdtmc" Nov 8 00:17:08.264816 systemd[1]: Started cri-containerd-44845a1fc64a739f4a869ab4b106a585b4604ce88f851d98881d9464d15052d1.scope - libcontainer container 44845a1fc64a739f4a869ab4b106a585b4604ce88f851d98881d9464d15052d1. Nov 8 00:17:08.264926 kubelet[2522]: E1108 00:17:08.264866 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.264926 kubelet[2522]: W1108 00:17:08.264879 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.264926 kubelet[2522]: E1108 00:17:08.264891 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.265205 kubelet[2522]: E1108 00:17:08.265138 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.265205 kubelet[2522]: W1108 00:17:08.265152 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.265205 kubelet[2522]: E1108 00:17:08.265161 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.265517 kubelet[2522]: E1108 00:17:08.265498 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.265555 kubelet[2522]: W1108 00:17:08.265515 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.265555 kubelet[2522]: E1108 00:17:08.265536 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:17:08.265555 kubelet[2522]: I1108 00:17:08.265554 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/485c28c7-3ce9-4d8e-9396-e75393354e2f-kubelet-dir\") pod \"csi-node-driver-vdtmc\" (UID: \"485c28c7-3ce9-4d8e-9396-e75393354e2f\") " pod="calico-system/csi-node-driver-vdtmc" Nov 8 00:17:08.265955 kubelet[2522]: E1108 00:17:08.265926 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.265955 kubelet[2522]: W1108 00:17:08.265964 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.266037 kubelet[2522]: E1108 00:17:08.265976 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.267302 kubelet[2522]: E1108 00:17:08.266313 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.267302 kubelet[2522]: W1108 00:17:08.266329 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.267302 kubelet[2522]: E1108 00:17:08.266340 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.267302 kubelet[2522]: E1108 00:17:08.266689 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.267302 kubelet[2522]: W1108 00:17:08.266698 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.267302 kubelet[2522]: E1108 00:17:08.266709 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.267486 kubelet[2522]: E1108 00:17:08.267465 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.267486 kubelet[2522]: W1108 00:17:08.267475 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.267486 kubelet[2522]: E1108 00:17:08.267485 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.285745 containerd[1465]: time="2025-11-08T00:17:08.285621071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:08.286030 containerd[1465]: time="2025-11-08T00:17:08.285735326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:08.286030 containerd[1465]: time="2025-11-08T00:17:08.285767266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:08.286030 containerd[1465]: time="2025-11-08T00:17:08.285932696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:08.311802 systemd[1]: Started cri-containerd-0c504f3498f1e11bfd1d54a8b00b33340b8baf0e8cee445eb5c4520f93f3598e.scope - libcontainer container 0c504f3498f1e11bfd1d54a8b00b33340b8baf0e8cee445eb5c4520f93f3598e. Nov 8 00:17:08.313122 containerd[1465]: time="2025-11-08T00:17:08.313026578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7fb49644-lv8k7,Uid:660a2fa1-0de6-4763-99ed-d30ee7759e24,Namespace:calico-system,Attempt:0,} returns sandbox id \"44845a1fc64a739f4a869ab4b106a585b4604ce88f851d98881d9464d15052d1\"" Nov 8 00:17:08.314387 kubelet[2522]: E1108 00:17:08.314352 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:08.317258 containerd[1465]: time="2025-11-08T00:17:08.315479820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:17:08.340329 containerd[1465]: time="2025-11-08T00:17:08.340279889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpljc,Uid:91b349b8-bf42-4fa1-94a9-e20415a05e44,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c504f3498f1e11bfd1d54a8b00b33340b8baf0e8cee445eb5c4520f93f3598e\"" Nov 8 00:17:08.341070 kubelet[2522]: E1108 00:17:08.341038 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:08.367205 kubelet[2522]: E1108 00:17:08.367175 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.367205 kubelet[2522]: W1108 00:17:08.367194 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.367205 kubelet[2522]: E1108 00:17:08.367215 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.367610 kubelet[2522]: E1108 00:17:08.367590 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.367610 kubelet[2522]: W1108 00:17:08.367605 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.367678 kubelet[2522]: E1108 00:17:08.367617 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:17:08.367993 kubelet[2522]: E1108 00:17:08.367972 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.367993 kubelet[2522]: W1108 00:17:08.367990 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.368084 kubelet[2522]: E1108 00:17:08.368008 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.368327 kubelet[2522]: E1108 00:17:08.368311 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.368327 kubelet[2522]: W1108 00:17:08.368326 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.368327 kubelet[2522]: E1108 00:17:08.368339 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.368625 kubelet[2522]: E1108 00:17:08.368610 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.368625 kubelet[2522]: W1108 00:17:08.368623 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.368705 kubelet[2522]: E1108 00:17:08.368635 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.368922 kubelet[2522]: E1108 00:17:08.368904 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.368922 kubelet[2522]: W1108 00:17:08.368919 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.369009 kubelet[2522]: E1108 00:17:08.368929 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.369242 kubelet[2522]: E1108 00:17:08.369222 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.369242 kubelet[2522]: W1108 00:17:08.369238 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.369322 kubelet[2522]: E1108 00:17:08.369252 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:17:08.369520 kubelet[2522]: E1108 00:17:08.369506 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.369520 kubelet[2522]: W1108 00:17:08.369517 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.369607 kubelet[2522]: E1108 00:17:08.369527 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.369773 kubelet[2522]: E1108 00:17:08.369759 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.369773 kubelet[2522]: W1108 00:17:08.369771 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.369856 kubelet[2522]: E1108 00:17:08.369781 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.370066 kubelet[2522]: E1108 00:17:08.370042 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.370107 kubelet[2522]: W1108 00:17:08.370065 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.370107 kubelet[2522]: E1108 00:17:08.370076 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.371754 kubelet[2522]: E1108 00:17:08.371730 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.371754 kubelet[2522]: W1108 00:17:08.371750 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.371841 kubelet[2522]: E1108 00:17:08.371764 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.372095 kubelet[2522]: E1108 00:17:08.372066 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.372095 kubelet[2522]: W1108 00:17:08.372082 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.372095 kubelet[2522]: E1108 00:17:08.372094 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:17:08.372446 kubelet[2522]: E1108 00:17:08.372423 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.372446 kubelet[2522]: W1108 00:17:08.372442 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.372563 kubelet[2522]: E1108 00:17:08.372489 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.372957 kubelet[2522]: E1108 00:17:08.372934 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.372957 kubelet[2522]: W1108 00:17:08.372953 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.373035 kubelet[2522]: E1108 00:17:08.372963 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.373245 kubelet[2522]: E1108 00:17:08.373224 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.373245 kubelet[2522]: W1108 00:17:08.373238 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.373367 kubelet[2522]: E1108 00:17:08.373251 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.373589 kubelet[2522]: E1108 00:17:08.373557 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.373659 kubelet[2522]: W1108 00:17:08.373597 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.373659 kubelet[2522]: E1108 00:17:08.373611 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.373890 kubelet[2522]: E1108 00:17:08.373860 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.373890 kubelet[2522]: W1108 00:17:08.373875 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.373890 kubelet[2522]: E1108 00:17:08.373885 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:17:08.374120 kubelet[2522]: E1108 00:17:08.374099 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.374120 kubelet[2522]: W1108 00:17:08.374111 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.374120 kubelet[2522]: E1108 00:17:08.374119 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.374362 kubelet[2522]: E1108 00:17:08.374345 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.374362 kubelet[2522]: W1108 00:17:08.374356 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.374362 kubelet[2522]: E1108 00:17:08.374365 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.374710 kubelet[2522]: E1108 00:17:08.374662 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.374875 kubelet[2522]: W1108 00:17:08.374781 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.374875 kubelet[2522]: E1108 00:17:08.374799 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.375165 kubelet[2522]: E1108 00:17:08.375146 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.375165 kubelet[2522]: W1108 00:17:08.375159 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.375165 kubelet[2522]: E1108 00:17:08.375170 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.375415 kubelet[2522]: E1108 00:17:08.375395 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.375415 kubelet[2522]: W1108 00:17:08.375407 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.375415 kubelet[2522]: E1108 00:17:08.375415 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:17:08.375689 kubelet[2522]: E1108 00:17:08.375669 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.375689 kubelet[2522]: W1108 00:17:08.375682 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.375689 kubelet[2522]: E1108 00:17:08.375691 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.375955 kubelet[2522]: E1108 00:17:08.375938 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.375955 kubelet[2522]: W1108 00:17:08.375949 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.375955 kubelet[2522]: E1108 00:17:08.375958 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.376205 kubelet[2522]: E1108 00:17:08.376188 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.376205 kubelet[2522]: W1108 00:17:08.376199 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.376205 kubelet[2522]: E1108 00:17:08.376208 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:08.379309 kubelet[2522]: E1108 00:17:08.379285 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:08.379309 kubelet[2522]: W1108 00:17:08.379303 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:08.379383 kubelet[2522]: E1108 00:17:08.379318 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:09.363710 kubelet[2522]: E1108 00:17:09.363658 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:10.069500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3815638978.mount: Deactivated successfully. 
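
The long run of FlexVolume warnings above comes from the kubelet probing its plugin directory: for each probe it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and tries to decode the driver's stdout as JSON. With the uds binary absent, the exec fails, the output stays empty, and decoding the empty string yields exactly the "unexpected end of JSON input" seen here. A minimal Go sketch of that probe; DriverStatus is a simplified stand-in, not the kubelet's full driver-call types:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus is the minimal shape of a FlexVolume driver reply.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        out, err := exec.Command(driver, "init").CombinedOutput()
        if err != nil {
            // With the binary missing, err is non-nil and out is empty,
            // mirroring the W "driver call failed" line above.
            fmt.Printf("driver call failed: %v, output: %q\n", err, out)
        }
        var st DriverStatus
        if jerr := json.Unmarshal(out, &st); jerr != nil {
            // Empty output is not valid JSON, hence
            // "unexpected end of JSON input" in the E line above.
            fmt.Printf("failed to unmarshal output %q: %v\n", out, jerr)
            return
        }
        fmt.Printf("driver init status: %+v\n", st)
    }

The probe re-runs whenever the plugin directory is rescanned, which is why the same three-line failure repeats for every volume reconcile until the binary exists.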
Nov 8 00:17:10.747343 containerd[1465]: time="2025-11-08T00:17:10.747292345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:10.748399 containerd[1465]: time="2025-11-08T00:17:10.748347265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:17:10.749525 containerd[1465]: time="2025-11-08T00:17:10.749488396Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:10.751652 containerd[1465]: time="2025-11-08T00:17:10.751619023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:10.752409 containerd[1465]: time="2025-11-08T00:17:10.752364572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.436844806s" Nov 8 00:17:10.752473 containerd[1465]: time="2025-11-08T00:17:10.752411550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:17:10.753353 containerd[1465]: time="2025-11-08T00:17:10.753322229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:17:10.774873 containerd[1465]: time="2025-11-08T00:17:10.774825880Z" level=info msg="CreateContainer within sandbox \"44845a1fc64a739f4a869ab4b106a585b4604ce88f851d98881d9464d15052d1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:17:10.792348 containerd[1465]: time="2025-11-08T00:17:10.792307827Z" level=info msg="CreateContainer within sandbox \"44845a1fc64a739f4a869ab4b106a585b4604ce88f851d98881d9464d15052d1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"19ed1966ff4ab9ad44ff28db60d762b078c7bd7cc055a89cbe4c764ed4fc1c42\"" Nov 8 00:17:10.792775 containerd[1465]: time="2025-11-08T00:17:10.792749175Z" level=info msg="StartContainer for \"19ed1966ff4ab9ad44ff28db60d762b078c7bd7cc055a89cbe4c764ed4fc1c42\"" Nov 8 00:17:10.827760 systemd[1]: Started cri-containerd-19ed1966ff4ab9ad44ff28db60d762b078c7bd7cc055a89cbe4c764ed4fc1c42.scope - libcontainer container 19ed1966ff4ab9ad44ff28db60d762b078c7bd7cc055a89cbe4c764ed4fc1c42. 
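
The unit names throughout these entries follow two naming schemes: systemd-escaped names like containerd\x2dmount3815638978.mount, where a literal '-' is written \x2d, and the kubelet's pod slices like kubepods-besteffort-pod91b349b8_bf42_4fa1_94a9_e20415a05e44.slice, where the dashes of the pod UID become underscores so they don't collide with systemd's slice-hierarchy separator. A small Go sketch reproducing both transformations; systemdEscape covers only the cases visible in this log (the real rules also special-case '/' and a leading '.'), and podSlice assumes a BestEffort-QoS pod as seen here:

    package main

    import (
        "fmt"
        "strings"
    )

    // systemdEscape mimics systemd's escaping for the characters seen in this
    // log: anything outside [A-Za-z0-9:_.] becomes \xNN.
    func systemdEscape(s string) string {
        var b strings.Builder
        for i := 0; i < len(s); i++ {
            c := s[i]
            switch {
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    // podSlice builds the kubelet-style slice name for a best-effort pod:
    // dashes in the UID are replaced with underscores.
    func podSlice(uid string) string {
        return "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(systemdEscape("containerd-mount3815638978")) // containerd\x2dmount3815638978
        fmt.Println(podSlice("91b349b8-bf42-4fa1-94a9-e20415a05e44"))
    }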
Nov 8 00:17:10.877117 containerd[1465]: time="2025-11-08T00:17:10.877058695Z" level=info msg="StartContainer for \"19ed1966ff4ab9ad44ff28db60d762b078c7bd7cc055a89cbe4c764ed4fc1c42\" returns successfully" Nov 8 00:17:11.366119 kubelet[2522]: E1108 00:17:11.366031 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:11.423406 kubelet[2522]: E1108 00:17:11.423368 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:11.481286 kubelet[2522]: E1108 00:17:11.481236 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:11.481286 kubelet[2522]: W1108 00:17:11.481266 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:11.481473 kubelet[2522]: E1108 00:17:11.481297 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:11.481551 kubelet[2522]: E1108 00:17:11.481523 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:11.481551 kubelet[2522]: W1108 00:17:11.481539 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:11.481551 kubelet[2522]: E1108 00:17:11.481550 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:11.481806 kubelet[2522]: E1108 00:17:11.481778 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:11.481806 kubelet[2522]: W1108 00:17:11.481796 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:11.481865 kubelet[2522]: E1108 00:17:11.481808 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:17:11.482122 kubelet[2522]: E1108 00:17:11.482071 2522 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:17:11.482122 kubelet[2522]: W1108 00:17:11.482102 2522 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:17:11.482122 kubelet[2522]: E1108 00:17:11.482116 2522 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[... the FlexVolume probe failure triplet above (driver-call.go:262 / driver-call.go:149 / plugins.go:703) repeats 29 more times between 00:17:11.482 and 00:17:11.500 as kubelet rescans the nodeagent~uds plugin directory; the duplicate entries are elided ...]
Nov 8 00:17:12.335821 containerd[1465]: time="2025-11-08T00:17:12.335746675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:12.336590 containerd[1465]: time="2025-11-08T00:17:12.336532951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:17:12.337926 containerd[1465]: time="2025-11-08T00:17:12.337885969Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:12.340260 containerd[1465]: time="2025-11-08T00:17:12.340213515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:12.340759 containerd[1465]: time="2025-11-08T00:17:12.340719334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.587362039s" Nov 8 00:17:12.340759 containerd[1465]: time="2025-11-08T00:17:12.340755031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:17:12.345541 containerd[1465]: time="2025-11-08T00:17:12.345493550Z" level=info msg="CreateContainer within sandbox \"0c504f3498f1e11bfd1d54a8b00b33340b8baf0e8cee445eb5c4520f93f3598e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:17:12.362300 containerd[1465]: time="2025-11-08T00:17:12.362022618Z" level=info msg="CreateContainer within sandbox \"0c504f3498f1e11bfd1d54a8b00b33340b8baf0e8cee445eb5c4520f93f3598e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3f20e9cfe9871a7a073b7dc1294f80473d815787828e1f71d9f097036bf1324b\"" Nov 8 00:17:12.362935 containerd[1465]: time="2025-11-08T00:17:12.362883624Z" level=info msg="StartContainer for \"3f20e9cfe9871a7a073b7dc1294f80473d815787828e1f71d9f097036bf1324b\"" Nov 8 00:17:12.403745 systemd[1]: Started cri-containerd-3f20e9cfe9871a7a073b7dc1294f80473d815787828e1f71d9f097036bf1324b.scope - libcontainer container 3f20e9cfe9871a7a073b7dc1294f80473d815787828e1f71d9f097036bf1324b. Nov 8 00:17:12.448078 kubelet[2522]: I1108 00:17:12.448023 2522 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:17:12.448904 containerd[1465]: time="2025-11-08T00:17:12.448864210Z" level=info msg="StartContainer for \"3f20e9cfe9871a7a073b7dc1294f80473d815787828e1f71d9f097036bf1324b\" returns successfully" Nov 8 00:17:12.452642 kubelet[2522]: E1108 00:17:12.452549 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:12.465038 systemd[1]: cri-containerd-3f20e9cfe9871a7a073b7dc1294f80473d815787828e1f71d9f097036bf1324b.scope: Deactivated successfully.
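[note] The FlexVolume error burst above comes from kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before that binary exists; the flexvol-driver container started here ships exactly such a driver, so the probes should quiet down once it has run. Both messages are ordinary Go standard-library errors; a minimal sketch reproducing them (it assumes no "uds" binary on PATH):

    // Reproduces the two stdlib errors kubelet logged: json.Unmarshal on the
    // driver's empty output, and exec of a binary that is not installed yet.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        var out map[string]any
        fmt.Println(json.Unmarshal([]byte(""), &out)) // unexpected end of JSON input

        err := exec.Command("uds", "init").Run() // assumes "uds" is absent from PATH
        fmt.Println(err) // exec: "uds": executable file not found in $PATH
    }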
Nov 8 00:17:12.766462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f20e9cfe9871a7a073b7dc1294f80473d815787828e1f71d9f097036bf1324b-rootfs.mount: Deactivated successfully. Nov 8 00:17:12.773366 containerd[1465]: time="2025-11-08T00:17:12.773277509Z" level=info msg="shim disconnected" id=3f20e9cfe9871a7a073b7dc1294f80473d815787828e1f71d9f097036bf1324b namespace=k8s.io Nov 8 00:17:12.773366 containerd[1465]: time="2025-11-08T00:17:12.773363911Z" level=warning msg="cleaning up after shim disconnected" id=3f20e9cfe9871a7a073b7dc1294f80473d815787828e1f71d9f097036bf1324b namespace=k8s.io Nov 8 00:17:12.773634 containerd[1465]: time="2025-11-08T00:17:12.773379250Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:17:13.364548 kubelet[2522]: E1108 00:17:13.364466 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:13.442968 kubelet[2522]: E1108 00:17:13.442915 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:13.447491 containerd[1465]: time="2025-11-08T00:17:13.447437865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:17:13.470728 kubelet[2522]: I1108 00:17:13.470638 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f7fb49644-lv8k7" podStartSLOduration=4.032570206 podStartE2EDuration="6.470619111s" podCreationTimestamp="2025-11-08 00:17:07 +0000 UTC" firstStartedPulling="2025-11-08 00:17:08.315072737 +0000 UTC m=+19.085763569" lastFinishedPulling="2025-11-08 00:17:10.753121651 +0000 UTC m=+21.523812474" observedRunningTime="2025-11-08 00:17:11.435439646 +0000 UTC m=+22.206130468" watchObservedRunningTime="2025-11-08 00:17:13.470619111 +0000 UTC m=+24.241309933" Nov 8 00:17:15.364233 kubelet[2522]: E1108 00:17:15.364175 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:17.399567 kubelet[2522]: E1108 00:17:17.399506 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:17.428950 containerd[1465]: time="2025-11-08T00:17:17.428886278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:17.430493 containerd[1465]: time="2025-11-08T00:17:17.430435083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:17:17.431728 containerd[1465]: time="2025-11-08T00:17:17.431689095Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 
00:17:17.436075 containerd[1465]: time="2025-11-08T00:17:17.436033845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:17.436772 containerd[1465]: time="2025-11-08T00:17:17.436742434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.989256299s" Nov 8 00:17:17.436832 containerd[1465]: time="2025-11-08T00:17:17.436778281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:17:17.441609 containerd[1465]: time="2025-11-08T00:17:17.441559700Z" level=info msg="CreateContainer within sandbox \"0c504f3498f1e11bfd1d54a8b00b33340b8baf0e8cee445eb5c4520f93f3598e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:17:17.464986 containerd[1465]: time="2025-11-08T00:17:17.464932594Z" level=info msg="CreateContainer within sandbox \"0c504f3498f1e11bfd1d54a8b00b33340b8baf0e8cee445eb5c4520f93f3598e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"058315b69c6d9b772114a2118f56ce5520ecb0157a70e341fbf96208871c4900\"" Nov 8 00:17:17.467270 containerd[1465]: time="2025-11-08T00:17:17.465445395Z" level=info msg="StartContainer for \"058315b69c6d9b772114a2118f56ce5520ecb0157a70e341fbf96208871c4900\"" Nov 8 00:17:17.500714 systemd[1]: Started cri-containerd-058315b69c6d9b772114a2118f56ce5520ecb0157a70e341fbf96208871c4900.scope - libcontainer container 058315b69c6d9b772114a2118f56ce5520ecb0157a70e341fbf96208871c4900. Nov 8 00:17:17.533110 containerd[1465]: time="2025-11-08T00:17:17.533069906Z" level=info msg="StartContainer for \"058315b69c6d9b772114a2118f56ce5520ecb0157a70e341fbf96208871c4900\" returns successfully" Nov 8 00:17:18.455102 kubelet[2522]: E1108 00:17:18.455056 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:18.526001 containerd[1465]: time="2025-11-08T00:17:18.525936729Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:17:18.529915 systemd[1]: cri-containerd-058315b69c6d9b772114a2118f56ce5520ecb0157a70e341fbf96208871c4900.scope: Deactivated successfully. Nov 8 00:17:18.548657 kubelet[2522]: I1108 00:17:18.548616 2522 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:17:18.553409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-058315b69c6d9b772114a2118f56ce5520ecb0157a70e341fbf96208871c4900-rootfs.mount: Deactivated successfully. 
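[note] The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window, which can be taken from the monotonic m=+ offsets on firstStartedPulling and lastFinishedPulling. A small Go check of the arithmetic (timestamps copied from the entry; assumes the m=+ offsets share one clock base):

    // Checks the startup-latency arithmetic from the kubelet entry above:
    // SLO duration = end-to-end startup time minus the image-pull window.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339Nano, "2025-11-08T00:17:07Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-11-08T00:17:13.470619111Z")
        pullWindow := time.Duration((21.523812474 - 19.085763569) * float64(time.Second))

        e2e := running.Sub(created)
        fmt.Println(e2e)              // 6.470619111s (podStartE2EDuration)
        fmt.Println(e2e - pullWindow) // ~4.032570206s (podStartSLOduration)
    }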
Nov 8 00:17:18.841936 containerd[1465]: time="2025-11-08T00:17:18.841563483Z" level=info msg="shim disconnected" id=058315b69c6d9b772114a2118f56ce5520ecb0157a70e341fbf96208871c4900 namespace=k8s.io Nov 8 00:17:18.841936 containerd[1465]: time="2025-11-08T00:17:18.841657069Z" level=warning msg="cleaning up after shim disconnected" id=058315b69c6d9b772114a2118f56ce5520ecb0157a70e341fbf96208871c4900 namespace=k8s.io Nov 8 00:17:18.841936 containerd[1465]: time="2025-11-08T00:17:18.841670233Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:17:18.842818 kubelet[2522]: I1108 00:17:18.842765 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7nj\" (UniqueName: \"kubernetes.io/projected/0a4ccd10-3267-4054-9505-eb9db275a87f-kube-api-access-tz7nj\") pod \"calico-kube-controllers-5f7468b9b5-s94kb\" (UID: \"0a4ccd10-3267-4054-9505-eb9db275a87f\") " pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" Nov 8 00:17:18.843043 kubelet[2522]: I1108 00:17:18.842902 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a4ccd10-3267-4054-9505-eb9db275a87f-tigera-ca-bundle\") pod \"calico-kube-controllers-5f7468b9b5-s94kb\" (UID: \"0a4ccd10-3267-4054-9505-eb9db275a87f\") " pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" Nov 8 00:17:18.850115 systemd[1]: Created slice kubepods-besteffort-pod0a4ccd10_3267_4054_9505_eb9db275a87f.slice - libcontainer container kubepods-besteffort-pod0a4ccd10_3267_4054_9505_eb9db275a87f.slice. Nov 8 00:17:18.864455 systemd[1]: Created slice kubepods-besteffort-pod7f4fec81_40b7_4cbb_9ed0_146aff61e0a7.slice - libcontainer container kubepods-besteffort-pod7f4fec81_40b7_4cbb_9ed0_146aff61e0a7.slice. Nov 8 00:17:18.877051 systemd[1]: Created slice kubepods-besteffort-pod89b15836_7628_4868_bd25_c2735fc5d488.slice - libcontainer container kubepods-besteffort-pod89b15836_7628_4868_bd25_c2735fc5d488.slice. Nov 8 00:17:18.889408 systemd[1]: Created slice kubepods-burstable-pod60f79ede_23ce_4941_9970_af1b19912c40.slice - libcontainer container kubepods-burstable-pod60f79ede_23ce_4941_9970_af1b19912c40.slice. Nov 8 00:17:18.899622 systemd[1]: Created slice kubepods-besteffort-pod0ffc1fd0_1ed9_42f4_a06d_067b691194ce.slice - libcontainer container kubepods-besteffort-pod0ffc1fd0_1ed9_42f4_a06d_067b691194ce.slice. Nov 8 00:17:18.905793 systemd[1]: Created slice kubepods-burstable-pod67db3da6_15c7_4668_a4a2_6c4899b5791f.slice - libcontainer container kubepods-burstable-pod67db3da6_15c7_4668_a4a2_6c4899b5791f.slice. Nov 8 00:17:18.912029 systemd[1]: Created slice kubepods-besteffort-pod031c3221_a127_47c9_883a_8bd9e65d7753.slice - libcontainer container kubepods-besteffort-pod031c3221_a127_47c9_883a_8bd9e65d7753.slice. 
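[note] The kubepods-*.slice names systemd creates above are derived mechanically from each pod's QoS class and UID, with the dashes in the UID escaped to underscores to form a valid systemd unit name. A sketch of that mapping (sliceName is a hypothetical helper for illustration, not kubelet's actual code; the expected output matches the slice in the log):

    // Hypothetical helper mirroring the slice names in the log:
    // kubepods-<qos>-pod<uid>.slice, with the UID's dashes escaped.
    package main

    import (
        "fmt"
        "strings"
    )

    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "0a4ccd10-3267-4054-9505-eb9db275a87f"))
        // kubepods-besteffort-pod0a4ccd10_3267_4054_9505_eb9db275a87f.slice
    }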
Nov 8 00:17:19.044721 kubelet[2522]: I1108 00:17:19.044627 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f4fec81-40b7-4cbb-9ed0-146aff61e0a7-goldmane-ca-bundle\") pod \"goldmane-666569f655-2fwvz\" (UID: \"7f4fec81-40b7-4cbb-9ed0-146aff61e0a7\") " pod="calico-system/goldmane-666569f655-2fwvz" Nov 8 00:17:19.044911 kubelet[2522]: I1108 00:17:19.044781 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-whisker-ca-bundle\") pod \"whisker-cd5855f48-b5fpd\" (UID: \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\") " pod="calico-system/whisker-cd5855f48-b5fpd" Nov 8 00:17:19.044911 kubelet[2522]: I1108 00:17:19.044857 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/031c3221-a127-47c9-883a-8bd9e65d7753-calico-apiserver-certs\") pod \"calico-apiserver-8577fdc947-lzmrt\" (UID: \"031c3221-a127-47c9-883a-8bd9e65d7753\") " pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" Nov 8 00:17:19.044961 kubelet[2522]: I1108 00:17:19.044938 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-whisker-backend-key-pair\") pod \"whisker-cd5855f48-b5fpd\" (UID: \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\") " pod="calico-system/whisker-cd5855f48-b5fpd" Nov 8 00:17:19.044994 kubelet[2522]: I1108 00:17:19.044969 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60f79ede-23ce-4941-9970-af1b19912c40-config-volume\") pod \"coredns-674b8bbfcf-sdjt5\" (UID: \"60f79ede-23ce-4941-9970-af1b19912c40\") " pod="kube-system/coredns-674b8bbfcf-sdjt5" Nov 8 00:17:19.045039 kubelet[2522]: I1108 00:17:19.044996 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn7xq\" (UniqueName: \"kubernetes.io/projected/60f79ede-23ce-4941-9970-af1b19912c40-kube-api-access-dn7xq\") pod \"coredns-674b8bbfcf-sdjt5\" (UID: \"60f79ede-23ce-4941-9970-af1b19912c40\") " pod="kube-system/coredns-674b8bbfcf-sdjt5" Nov 8 00:17:19.045039 kubelet[2522]: I1108 00:17:19.045016 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh5vg\" (UniqueName: \"kubernetes.io/projected/7f4fec81-40b7-4cbb-9ed0-146aff61e0a7-kube-api-access-lh5vg\") pod \"goldmane-666569f655-2fwvz\" (UID: \"7f4fec81-40b7-4cbb-9ed0-146aff61e0a7\") " pod="calico-system/goldmane-666569f655-2fwvz" Nov 8 00:17:19.045229 kubelet[2522]: I1108 00:17:19.045101 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fxk4\" (UniqueName: \"kubernetes.io/projected/67db3da6-15c7-4668-a4a2-6c4899b5791f-kube-api-access-7fxk4\") pod \"coredns-674b8bbfcf-kqptp\" (UID: \"67db3da6-15c7-4668-a4a2-6c4899b5791f\") " pod="kube-system/coredns-674b8bbfcf-kqptp" Nov 8 00:17:19.045286 kubelet[2522]: I1108 00:17:19.045255 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw59n\" (UniqueName: 
\"kubernetes.io/projected/031c3221-a127-47c9-883a-8bd9e65d7753-kube-api-access-lw59n\") pod \"calico-apiserver-8577fdc947-lzmrt\" (UID: \"031c3221-a127-47c9-883a-8bd9e65d7753\") " pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" Nov 8 00:17:19.045324 kubelet[2522]: I1108 00:17:19.045287 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dwgc\" (UniqueName: \"kubernetes.io/projected/89b15836-7628-4868-bd25-c2735fc5d488-kube-api-access-7dwgc\") pod \"calico-apiserver-8577fdc947-456nr\" (UID: \"89b15836-7628-4868-bd25-c2735fc5d488\") " pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" Nov 8 00:17:19.045351 kubelet[2522]: I1108 00:17:19.045317 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f4fec81-40b7-4cbb-9ed0-146aff61e0a7-config\") pod \"goldmane-666569f655-2fwvz\" (UID: \"7f4fec81-40b7-4cbb-9ed0-146aff61e0a7\") " pod="calico-system/goldmane-666569f655-2fwvz" Nov 8 00:17:19.045378 kubelet[2522]: I1108 00:17:19.045350 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7f4fec81-40b7-4cbb-9ed0-146aff61e0a7-goldmane-key-pair\") pod \"goldmane-666569f655-2fwvz\" (UID: \"7f4fec81-40b7-4cbb-9ed0-146aff61e0a7\") " pod="calico-system/goldmane-666569f655-2fwvz" Nov 8 00:17:19.045404 kubelet[2522]: I1108 00:17:19.045374 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67db3da6-15c7-4668-a4a2-6c4899b5791f-config-volume\") pod \"coredns-674b8bbfcf-kqptp\" (UID: \"67db3da6-15c7-4668-a4a2-6c4899b5791f\") " pod="kube-system/coredns-674b8bbfcf-kqptp" Nov 8 00:17:19.045404 kubelet[2522]: I1108 00:17:19.045395 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89b15836-7628-4868-bd25-c2735fc5d488-calico-apiserver-certs\") pod \"calico-apiserver-8577fdc947-456nr\" (UID: \"89b15836-7628-4868-bd25-c2735fc5d488\") " pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" Nov 8 00:17:19.045459 kubelet[2522]: I1108 00:17:19.045426 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-867jh\" (UniqueName: \"kubernetes.io/projected/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-kube-api-access-867jh\") pod \"whisker-cd5855f48-b5fpd\" (UID: \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\") " pod="calico-system/whisker-cd5855f48-b5fpd" Nov 8 00:17:19.203655 kubelet[2522]: E1108 00:17:19.199272 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:19.210057 containerd[1465]: time="2025-11-08T00:17:19.210015048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sdjt5,Uid:60f79ede-23ce-4941-9970-af1b19912c40,Namespace:kube-system,Attempt:0,}" Nov 8 00:17:19.210459 containerd[1465]: time="2025-11-08T00:17:19.210014998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd5855f48-b5fpd,Uid:0ffc1fd0-1ed9-42f4-a06d-067b691194ce,Namespace:calico-system,Attempt:0,}" Nov 8 00:17:19.211290 kubelet[2522]: E1108 00:17:19.211259 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:19.211605 containerd[1465]: time="2025-11-08T00:17:19.211561379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqptp,Uid:67db3da6-15c7-4668-a4a2-6c4899b5791f,Namespace:kube-system,Attempt:0,}" Nov 8 00:17:19.214535 containerd[1465]: time="2025-11-08T00:17:19.214504700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8577fdc947-lzmrt,Uid:031c3221-a127-47c9-883a-8bd9e65d7753,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:17:19.371176 systemd[1]: Created slice kubepods-besteffort-pod485c28c7_3ce9_4d8e_9396_e75393354e2f.slice - libcontainer container kubepods-besteffort-pod485c28c7_3ce9_4d8e_9396_e75393354e2f.slice. Nov 8 00:17:19.374236 containerd[1465]: time="2025-11-08T00:17:19.374174710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdtmc,Uid:485c28c7-3ce9-4d8e-9396-e75393354e2f,Namespace:calico-system,Attempt:0,}" Nov 8 00:17:19.456729 containerd[1465]: time="2025-11-08T00:17:19.456547440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7468b9b5-s94kb,Uid:0a4ccd10-3267-4054-9505-eb9db275a87f,Namespace:calico-system,Attempt:0,}" Nov 8 00:17:19.459251 kubelet[2522]: E1108 00:17:19.458895 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:19.460143 containerd[1465]: time="2025-11-08T00:17:19.460102318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:17:19.475090 containerd[1465]: time="2025-11-08T00:17:19.475041079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2fwvz,Uid:7f4fec81-40b7-4cbb-9ed0-146aff61e0a7,Namespace:calico-system,Attempt:0,}" Nov 8 00:17:19.489201 containerd[1465]: time="2025-11-08T00:17:19.489133853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8577fdc947-456nr,Uid:89b15836-7628-4868-bd25-c2735fc5d488,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:17:19.825850 containerd[1465]: time="2025-11-08T00:17:19.825691511Z" level=error msg="Failed to destroy network for sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.841134 containerd[1465]: time="2025-11-08T00:17:19.840734086Z" level=error msg="Failed to destroy network for sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.841134 containerd[1465]: time="2025-11-08T00:17:19.840936786Z" level=error msg="encountered an error cleaning up failed sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.841134 containerd[1465]: time="2025-11-08T00:17:19.841022286Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8577fdc947-lzmrt,Uid:031c3221-a127-47c9-883a-8bd9e65d7753,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.841318 containerd[1465]: time="2025-11-08T00:17:19.841207364Z" level=error msg="encountered an error cleaning up failed sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.841318 containerd[1465]: time="2025-11-08T00:17:19.841275332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdtmc,Uid:485c28c7-3ce9-4d8e-9396-e75393354e2f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.841370 kubelet[2522]: E1108 00:17:19.841227 2522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.841370 kubelet[2522]: E1108 00:17:19.841290 2522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" Nov 8 00:17:19.841370 kubelet[2522]: E1108 00:17:19.841313 2522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" Nov 8 00:17:19.841465 kubelet[2522]: E1108 00:17:19.841359 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8577fdc947-lzmrt_calico-apiserver(031c3221-a127-47c9-883a-8bd9e65d7753)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8577fdc947-lzmrt_calico-apiserver(031c3221-a127-47c9-883a-8bd9e65d7753)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" podUID="031c3221-a127-47c9-883a-8bd9e65d7753" Nov 8 00:17:19.842316 kubelet[2522]: E1108 00:17:19.842282 2522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.842366 kubelet[2522]: E1108 00:17:19.842320 2522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vdtmc" Nov 8 00:17:19.842366 kubelet[2522]: E1108 00:17:19.842336 2522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vdtmc" Nov 8 00:17:19.842418 kubelet[2522]: E1108 00:17:19.842363 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vdtmc_calico-system(485c28c7-3ce9-4d8e-9396-e75393354e2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vdtmc_calico-system(485c28c7-3ce9-4d8e-9396-e75393354e2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:19.845205 containerd[1465]: time="2025-11-08T00:17:19.845153776Z" level=error msg="Failed to destroy network for sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.854018 containerd[1465]: time="2025-11-08T00:17:19.853817918Z" level=error msg="encountered an error cleaning up failed sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.854018 containerd[1465]: time="2025-11-08T00:17:19.853907315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sdjt5,Uid:60f79ede-23ce-4941-9970-af1b19912c40,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.854154 kubelet[2522]: E1108 00:17:19.854108 2522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.854215 kubelet[2522]: E1108 00:17:19.854161 2522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sdjt5" Nov 8 00:17:19.854215 kubelet[2522]: E1108 00:17:19.854181 2522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sdjt5" Nov 8 00:17:19.854274 containerd[1465]: time="2025-11-08T00:17:19.854127169Z" level=error msg="Failed to destroy network for sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.854322 kubelet[2522]: E1108 00:17:19.854235 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sdjt5_kube-system(60f79ede-23ce-4941-9970-af1b19912c40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sdjt5_kube-system(60f79ede-23ce-4941-9970-af1b19912c40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sdjt5" podUID="60f79ede-23ce-4941-9970-af1b19912c40" Nov 8 00:17:19.854841 containerd[1465]: time="2025-11-08T00:17:19.854804819Z" level=error msg="encountered an error cleaning up failed sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.855011 containerd[1465]: time="2025-11-08T00:17:19.854889669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd5855f48-b5fpd,Uid:0ffc1fd0-1ed9-42f4-a06d-067b691194ce,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.855201 kubelet[2522]: E1108 00:17:19.855086 2522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.855273 kubelet[2522]: E1108 00:17:19.855209 2522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cd5855f48-b5fpd" Nov 8 00:17:19.855273 kubelet[2522]: E1108 00:17:19.855227 2522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cd5855f48-b5fpd" Nov 8 00:17:19.855273 kubelet[2522]: E1108 00:17:19.855261 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cd5855f48-b5fpd_calico-system(0ffc1fd0-1ed9-42f4-a06d-067b691194ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cd5855f48-b5fpd_calico-system(0ffc1fd0-1ed9-42f4-a06d-067b691194ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cd5855f48-b5fpd" podUID="0ffc1fd0-1ed9-42f4-a06d-067b691194ce" Nov 8 00:17:19.860043 containerd[1465]: time="2025-11-08T00:17:19.859998882Z" level=error msg="Failed to destroy network for sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.860648 containerd[1465]: time="2025-11-08T00:17:19.860604077Z" level=error msg="encountered an error cleaning up failed sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.860827 containerd[1465]: time="2025-11-08T00:17:19.860739231Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-kqptp,Uid:67db3da6-15c7-4668-a4a2-6c4899b5791f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.860977 kubelet[2522]: E1108 00:17:19.860927 2522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.861077 kubelet[2522]: E1108 00:17:19.861046 2522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kqptp" Nov 8 00:17:19.861117 kubelet[2522]: E1108 00:17:19.861080 2522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kqptp" Nov 8 00:17:19.861167 kubelet[2522]: E1108 00:17:19.861138 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kqptp_kube-system(67db3da6-15c7-4668-a4a2-6c4899b5791f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kqptp_kube-system(67db3da6-15c7-4668-a4a2-6c4899b5791f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kqptp" podUID="67db3da6-15c7-4668-a4a2-6c4899b5791f" Nov 8 00:17:19.868629 containerd[1465]: time="2025-11-08T00:17:19.868535825Z" level=error msg="Failed to destroy network for sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.872088 containerd[1465]: time="2025-11-08T00:17:19.872037704Z" level=error msg="encountered an error cleaning up failed sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.872354 containerd[1465]: 
time="2025-11-08T00:17:19.872218964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2fwvz,Uid:7f4fec81-40b7-4cbb-9ed0-146aff61e0a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.872865 kubelet[2522]: E1108 00:17:19.872814 2522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.873112 kubelet[2522]: E1108 00:17:19.873066 2522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2fwvz" Nov 8 00:17:19.873279 kubelet[2522]: E1108 00:17:19.873206 2522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2fwvz" Nov 8 00:17:19.873515 kubelet[2522]: E1108 00:17:19.873398 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-2fwvz_calico-system(7f4fec81-40b7-4cbb-9ed0-146aff61e0a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-2fwvz_calico-system(7f4fec81-40b7-4cbb-9ed0-146aff61e0a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2fwvz" podUID="7f4fec81-40b7-4cbb-9ed0-146aff61e0a7" Nov 8 00:17:19.877860 containerd[1465]: time="2025-11-08T00:17:19.877786167Z" level=error msg="Failed to destroy network for sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.878615 containerd[1465]: time="2025-11-08T00:17:19.878586608Z" level=error msg="encountered an error cleaning up failed sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 8 00:17:19.878662 containerd[1465]: time="2025-11-08T00:17:19.878644416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8577fdc947-456nr,Uid:89b15836-7628-4868-bd25-c2735fc5d488,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.878922 kubelet[2522]: E1108 00:17:19.878879 2522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.878999 kubelet[2522]: E1108 00:17:19.878949 2522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" Nov 8 00:17:19.878999 kubelet[2522]: E1108 00:17:19.878975 2522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" Nov 8 00:17:19.879069 kubelet[2522]: E1108 00:17:19.879036 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8577fdc947-456nr_calico-apiserver(89b15836-7628-4868-bd25-c2735fc5d488)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8577fdc947-456nr_calico-apiserver(89b15836-7628-4868-bd25-c2735fc5d488)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" podUID="89b15836-7628-4868-bd25-c2735fc5d488" Nov 8 00:17:19.884878 containerd[1465]: time="2025-11-08T00:17:19.884846490Z" level=error msg="Failed to destroy network for sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.885236 containerd[1465]: time="2025-11-08T00:17:19.885204111Z" level=error msg="encountered an error cleaning up failed sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.885310 containerd[1465]: time="2025-11-08T00:17:19.885290222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7468b9b5-s94kb,Uid:0a4ccd10-3267-4054-9505-eb9db275a87f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.885553 kubelet[2522]: E1108 00:17:19.885503 2522 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:19.885603 kubelet[2522]: E1108 00:17:19.885584 2522 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" Nov 8 00:17:19.885636 kubelet[2522]: E1108 00:17:19.885609 2522 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" Nov 8 00:17:19.885690 kubelet[2522]: E1108 00:17:19.885660 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f7468b9b5-s94kb_calico-system(0a4ccd10-3267-4054-9505-eb9db275a87f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f7468b9b5-s94kb_calico-system(0a4ccd10-3267-4054-9505-eb9db275a87f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" podUID="0a4ccd10-3267-4054-9505-eb9db275a87f" Nov 8 00:17:20.460852 kubelet[2522]: I1108 00:17:20.460800 2522 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Nov 8 00:17:20.461976 kubelet[2522]: I1108 00:17:20.461952 2522 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Nov 8 00:17:20.463602 kubelet[2522]: I1108 00:17:20.463518 2522 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:20.464844 kubelet[2522]: I1108 00:17:20.464815 2522 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:20.476144 containerd[1465]: time="2025-11-08T00:17:20.475658000Z" level=info msg="StopPodSandbox for \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\"" Nov 8 00:17:20.476144 containerd[1465]: time="2025-11-08T00:17:20.475850872Z" level=info msg="StopPodSandbox for \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\"" Nov 8 00:17:20.476144 containerd[1465]: time="2025-11-08T00:17:20.476104558Z" level=info msg="Ensure that sandbox 974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06 in task-service has been cleanup successfully" Nov 8 00:17:20.476585 containerd[1465]: time="2025-11-08T00:17:20.476474942Z" level=info msg="StopPodSandbox for \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\"" Nov 8 00:17:20.476683 containerd[1465]: time="2025-11-08T00:17:20.476650081Z" level=info msg="Ensure that sandbox 7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089 in task-service has been cleanup successfully" Nov 8 00:17:20.477509 containerd[1465]: time="2025-11-08T00:17:20.477463386Z" level=info msg="StopPodSandbox for \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\"" Nov 8 00:17:20.478059 containerd[1465]: time="2025-11-08T00:17:20.477740305Z" level=info msg="Ensure that sandbox 9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb in task-service has been cleanup successfully" Nov 8 00:17:20.480882 kubelet[2522]: I1108 00:17:20.480841 2522 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:20.488369 containerd[1465]: time="2025-11-08T00:17:20.486813926Z" level=info msg="StopPodSandbox for \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\"" Nov 8 00:17:20.488369 containerd[1465]: time="2025-11-08T00:17:20.486868017Z" level=info msg="Ensure that sandbox a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc in task-service has been cleanup successfully" Nov 8 00:17:20.488369 containerd[1465]: time="2025-11-08T00:17:20.487846262Z" level=info msg="Ensure that sandbox db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150 in task-service has been cleanup successfully" Nov 8 00:17:20.495395 kubelet[2522]: I1108 00:17:20.493117 2522 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:20.499985 containerd[1465]: time="2025-11-08T00:17:20.499942370Z" level=info msg="StopPodSandbox for \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\"" Nov 8 00:17:20.500892 containerd[1465]: time="2025-11-08T00:17:20.500859321Z" level=info msg="Ensure that sandbox f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd in task-service has been cleanup successfully" Nov 8 00:17:20.535850 kubelet[2522]: I1108 00:17:20.535047 2522 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:20.547074 containerd[1465]: time="2025-11-08T00:17:20.547011278Z" level=info msg="StopPodSandbox for 
\"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\"" Nov 8 00:17:20.559878 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b-shm.mount: Deactivated successfully. Nov 8 00:17:20.560016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06-shm.mount: Deactivated successfully. Nov 8 00:17:20.583329 containerd[1465]: time="2025-11-08T00:17:20.582813053Z" level=error msg="StopPodSandbox for \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\" failed" error="failed to destroy network for sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:20.583329 containerd[1465]: time="2025-11-08T00:17:20.582903823Z" level=error msg="StopPodSandbox for \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\" failed" error="failed to destroy network for sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:20.583329 containerd[1465]: time="2025-11-08T00:17:20.582944349Z" level=error msg="StopPodSandbox for \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\" failed" error="failed to destroy network for sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:20.583329 containerd[1465]: time="2025-11-08T00:17:20.583006937Z" level=error msg="StopPodSandbox for \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\" failed" error="failed to destroy network for sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:20.583558 kubelet[2522]: E1108 00:17:20.583280 2522 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:20.583558 kubelet[2522]: E1108 00:17:20.583348 2522 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150"} Nov 8 00:17:20.583558 kubelet[2522]: E1108 00:17:20.583406 2522 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"031c3221-a127-47c9-883a-8bd9e65d7753\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:20.583558 kubelet[2522]: E1108 00:17:20.583436 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"031c3221-a127-47c9-883a-8bd9e65d7753\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" podUID="031c3221-a127-47c9-883a-8bd9e65d7753" Nov 8 00:17:20.583743 kubelet[2522]: E1108 00:17:20.583479 2522 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:20.583743 kubelet[2522]: E1108 00:17:20.583501 2522 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06"} Nov 8 00:17:20.583743 kubelet[2522]: E1108 00:17:20.583522 2522 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:20.583743 kubelet[2522]: E1108 00:17:20.583538 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cd5855f48-b5fpd" podUID="0ffc1fd0-1ed9-42f4-a06d-067b691194ce" Nov 8 00:17:20.583868 kubelet[2522]: E1108 00:17:20.583558 2522 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Nov 8 00:17:20.583868 kubelet[2522]: E1108 00:17:20.583593 2522 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb"} Nov 8 00:17:20.583868 kubelet[2522]: 
E1108 00:17:20.583616 2522 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f4fec81-40b7-4cbb-9ed0-146aff61e0a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:20.583868 kubelet[2522]: E1108 00:17:20.583643 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f4fec81-40b7-4cbb-9ed0-146aff61e0a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2fwvz" podUID="7f4fec81-40b7-4cbb-9ed0-146aff61e0a7" Nov 8 00:17:20.583986 kubelet[2522]: E1108 00:17:20.583672 2522 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:20.583986 kubelet[2522]: E1108 00:17:20.583693 2522 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089"} Nov 8 00:17:20.583986 kubelet[2522]: E1108 00:17:20.583716 2522 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a4ccd10-3267-4054-9505-eb9db275a87f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:20.583986 kubelet[2522]: E1108 00:17:20.583741 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a4ccd10-3267-4054-9505-eb9db275a87f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" podUID="0a4ccd10-3267-4054-9505-eb9db275a87f" Nov 8 00:17:20.588607 containerd[1465]: time="2025-11-08T00:17:20.588535387Z" level=error msg="StopPodSandbox for \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\" failed" error="failed to destroy network for sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:20.588717 kubelet[2522]: E1108 00:17:20.588680 2522 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:20.588717 kubelet[2522]: E1108 00:17:20.588705 2522 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd"} Nov 8 00:17:20.588791 kubelet[2522]: E1108 00:17:20.588726 2522 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"485c28c7-3ce9-4d8e-9396-e75393354e2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:20.588791 kubelet[2522]: E1108 00:17:20.588756 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"485c28c7-3ce9-4d8e-9396-e75393354e2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:20.592904 containerd[1465]: time="2025-11-08T00:17:20.592867382Z" level=info msg="Ensure that sandbox 251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8 in task-service has been cleanup successfully" Nov 8 00:17:20.593251 kubelet[2522]: I1108 00:17:20.593215 2522 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:20.594038 containerd[1465]: time="2025-11-08T00:17:20.593946296Z" level=info msg="StopPodSandbox for \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\"" Nov 8 00:17:20.594224 containerd[1465]: time="2025-11-08T00:17:20.594107308Z" level=info msg="Ensure that sandbox 5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b in task-service has been cleanup successfully" Nov 8 00:17:20.631496 containerd[1465]: time="2025-11-08T00:17:20.631399267Z" level=error msg="StopPodSandbox for \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\" failed" error="failed to destroy network for sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:20.631839 kubelet[2522]: E1108 00:17:20.631775 2522 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:20.631919 kubelet[2522]: E1108 00:17:20.631852 2522 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8"} Nov 8 00:17:20.631919 kubelet[2522]: E1108 00:17:20.631903 2522 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67db3da6-15c7-4668-a4a2-6c4899b5791f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:20.632042 kubelet[2522]: E1108 00:17:20.631940 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67db3da6-15c7-4668-a4a2-6c4899b5791f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kqptp" podUID="67db3da6-15c7-4668-a4a2-6c4899b5791f" Nov 8 00:17:20.636637 containerd[1465]: time="2025-11-08T00:17:20.636592668Z" level=error msg="StopPodSandbox for \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\" failed" error="failed to destroy network for sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:20.636834 kubelet[2522]: E1108 00:17:20.636789 2522 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Nov 8 00:17:20.636888 kubelet[2522]: E1108 00:17:20.636837 2522 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc"} Nov 8 00:17:20.636888 kubelet[2522]: E1108 00:17:20.636863 2522 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89b15836-7628-4868-bd25-c2735fc5d488\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Nov 8 00:17:20.637017 kubelet[2522]: E1108 00:17:20.636887 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89b15836-7628-4868-bd25-c2735fc5d488\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" podUID="89b15836-7628-4868-bd25-c2735fc5d488" Nov 8 00:17:20.642048 containerd[1465]: time="2025-11-08T00:17:20.641863755Z" level=error msg="StopPodSandbox for \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\" failed" error="failed to destroy network for sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:17:20.642141 kubelet[2522]: E1108 00:17:20.641991 2522 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:20.642141 kubelet[2522]: E1108 00:17:20.642015 2522 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b"} Nov 8 00:17:20.642141 kubelet[2522]: E1108 00:17:20.642035 2522 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60f79ede-23ce-4941-9970-af1b19912c40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:17:20.642141 kubelet[2522]: E1108 00:17:20.642053 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60f79ede-23ce-4941-9970-af1b19912c40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sdjt5" podUID="60f79ede-23ce-4941-9970-af1b19912c40" Nov 8 00:17:27.014142 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:57874.service - OpenSSH per-connection server daemon (10.0.0.1:57874). 
Nov 8 00:17:27.058002 sshd[3741]: Accepted publickey for core from 10.0.0.1 port 57874 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:17:27.059964 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:27.065065 systemd-logind[1450]: New session 8 of user core. Nov 8 00:17:27.070693 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:17:27.231651 sshd[3741]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:27.239171 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:17:27.240173 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:57874.service: Deactivated successfully. Nov 8 00:17:27.242414 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:17:27.245513 systemd-logind[1450]: Removed session 8. Nov 8 00:17:28.539184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321249538.mount: Deactivated successfully. Nov 8 00:17:29.535369 containerd[1465]: time="2025-11-08T00:17:29.535290955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:29.537608 containerd[1465]: time="2025-11-08T00:17:29.537553198Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:29.538130 containerd[1465]: time="2025-11-08T00:17:29.538092910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:17:29.539797 containerd[1465]: time="2025-11-08T00:17:29.539757863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:17:29.540457 containerd[1465]: time="2025-11-08T00:17:29.540403093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.080246494s" Nov 8 00:17:29.540457 containerd[1465]: time="2025-11-08T00:17:29.540450362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:17:29.554796 containerd[1465]: time="2025-11-08T00:17:29.554740234Z" level=info msg="CreateContainer within sandbox \"0c504f3498f1e11bfd1d54a8b00b33340b8baf0e8cee445eb5c4520f93f3598e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:17:29.576283 containerd[1465]: time="2025-11-08T00:17:29.576235299Z" level=info msg="CreateContainer within sandbox \"0c504f3498f1e11bfd1d54a8b00b33340b8baf0e8cee445eb5c4520f93f3598e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"17d517b241f967b8c8a065384bb250aa7bfe92d804b0dd46ee8083bc4c370a28\"" Nov 8 00:17:29.576951 containerd[1465]: time="2025-11-08T00:17:29.576908020Z" level=info msg="StartContainer for \"17d517b241f967b8c8a065384bb250aa7bfe92d804b0dd46ee8083bc4c370a28\"" Nov 8 00:17:29.643791 systemd[1]: Started cri-containerd-17d517b241f967b8c8a065384bb250aa7bfe92d804b0dd46ee8083bc4c370a28.scope - libcontainer container 
17d517b241f967b8c8a065384bb250aa7bfe92d804b0dd46ee8083bc4c370a28. Nov 8 00:17:29.846065 containerd[1465]: time="2025-11-08T00:17:29.845895382Z" level=info msg="StartContainer for \"17d517b241f967b8c8a065384bb250aa7bfe92d804b0dd46ee8083bc4c370a28\" returns successfully" Nov 8 00:17:29.931874 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:17:29.931995 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 8 00:17:30.037959 containerd[1465]: time="2025-11-08T00:17:30.037886463Z" level=info msg="StopPodSandbox for \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\"" Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.136 [INFO][3820] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.136 [INFO][3820] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" iface="eth0" netns="/var/run/netns/cni-f9e13c72-3afb-b182-4480-64b90f7fe314" Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.138 [INFO][3820] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" iface="eth0" netns="/var/run/netns/cni-f9e13c72-3afb-b182-4480-64b90f7fe314" Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.138 [INFO][3820] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" iface="eth0" netns="/var/run/netns/cni-f9e13c72-3afb-b182-4480-64b90f7fe314" Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.138 [INFO][3820] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.138 [INFO][3820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.208 [INFO][3830] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" HandleID="k8s-pod-network.974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Workload="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.209 [INFO][3830] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.209 [INFO][3830] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.217 [WARNING][3830] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist.
Ignoring ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" HandleID="k8s-pod-network.974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Workload="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.217 [INFO][3830] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" HandleID="k8s-pod-network.974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Workload="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.220 [INFO][3830] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:30.228220 containerd[1465]: 2025-11-08 00:17:30.224 [INFO][3820] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:30.228742 containerd[1465]: time="2025-11-08T00:17:30.228437671Z" level=info msg="TearDown network for sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\" successfully" Nov 8 00:17:30.228742 containerd[1465]: time="2025-11-08T00:17:30.228475101Z" level=info msg="StopPodSandbox for \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\" returns successfully" Nov 8 00:17:30.320417 kubelet[2522]: I1108 00:17:30.320341 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-whisker-ca-bundle\") pod \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\" (UID: \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\") " Nov 8 00:17:30.320417 kubelet[2522]: I1108 00:17:30.320417 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-whisker-backend-key-pair\") pod \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\" (UID: \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\") " Nov 8 00:17:30.320417 kubelet[2522]: I1108 00:17:30.320442 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-867jh\" (UniqueName: \"kubernetes.io/projected/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-kube-api-access-867jh\") pod \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\" (UID: \"0ffc1fd0-1ed9-42f4-a06d-067b691194ce\") " Nov 8 00:17:30.321208 kubelet[2522]: I1108 00:17:30.321017 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0ffc1fd0-1ed9-42f4-a06d-067b691194ce" (UID: "0ffc1fd0-1ed9-42f4-a06d-067b691194ce"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:17:30.325282 kubelet[2522]: I1108 00:17:30.325230 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-kube-api-access-867jh" (OuterVolumeSpecName: "kube-api-access-867jh") pod "0ffc1fd0-1ed9-42f4-a06d-067b691194ce" (UID: "0ffc1fd0-1ed9-42f4-a06d-067b691194ce"). InnerVolumeSpecName "kube-api-access-867jh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:17:30.325282 kubelet[2522]: I1108 00:17:30.325278 2522 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0ffc1fd0-1ed9-42f4-a06d-067b691194ce" (UID: "0ffc1fd0-1ed9-42f4-a06d-067b691194ce"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:17:30.421093 kubelet[2522]: I1108 00:17:30.421034 2522 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 8 00:17:30.421093 kubelet[2522]: I1108 00:17:30.421083 2522 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 8 00:17:30.421187 kubelet[2522]: I1108 00:17:30.421101 2522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-867jh\" (UniqueName: \"kubernetes.io/projected/0ffc1fd0-1ed9-42f4-a06d-067b691194ce-kube-api-access-867jh\") on node \"localhost\" DevicePath \"\"" Nov 8 00:17:30.547476 systemd[1]: run-netns-cni\x2df9e13c72\x2d3afb\x2db182\x2d4480\x2d64b90f7fe314.mount: Deactivated successfully. Nov 8 00:17:30.547624 systemd[1]: var-lib-kubelet-pods-0ffc1fd0\x2d1ed9\x2d42f4\x2da06d\x2d067b691194ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d867jh.mount: Deactivated successfully. Nov 8 00:17:30.547718 systemd[1]: var-lib-kubelet-pods-0ffc1fd0\x2d1ed9\x2d42f4\x2da06d\x2d067b691194ce-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:17:30.625610 kubelet[2522]: E1108 00:17:30.622730 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:30.633068 systemd[1]: Removed slice kubepods-besteffort-pod0ffc1fd0_1ed9_42f4_a06d_067b691194ce.slice - libcontainer container kubepods-besteffort-pod0ffc1fd0_1ed9_42f4_a06d_067b691194ce.slice. Nov 8 00:17:30.656302 kubelet[2522]: I1108 00:17:30.654850 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lpljc" podStartSLOduration=2.455706712 podStartE2EDuration="23.654819721s" podCreationTimestamp="2025-11-08 00:17:07 +0000 UTC" firstStartedPulling="2025-11-08 00:17:08.342127605 +0000 UTC m=+19.112818427" lastFinishedPulling="2025-11-08 00:17:29.541240604 +0000 UTC m=+40.311931436" observedRunningTime="2025-11-08 00:17:30.643549764 +0000 UTC m=+41.414240596" watchObservedRunningTime="2025-11-08 00:17:30.654819721 +0000 UTC m=+41.425510543" Nov 8 00:17:30.862011 systemd[1]: Created slice kubepods-besteffort-pod53aaf5c5_a07c_4b5d_8085_2d2b30008c52.slice - libcontainer container kubepods-besteffort-pod53aaf5c5_a07c_4b5d_8085_2d2b30008c52.slice. 
Nov 8 00:17:30.925483 kubelet[2522]: I1108 00:17:30.925430 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/53aaf5c5-a07c-4b5d-8085-2d2b30008c52-whisker-backend-key-pair\") pod \"whisker-86b79d8578-fxtxr\" (UID: \"53aaf5c5-a07c-4b5d-8085-2d2b30008c52\") " pod="calico-system/whisker-86b79d8578-fxtxr" Nov 8 00:17:30.925483 kubelet[2522]: I1108 00:17:30.925479 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gscnr\" (UniqueName: \"kubernetes.io/projected/53aaf5c5-a07c-4b5d-8085-2d2b30008c52-kube-api-access-gscnr\") pod \"whisker-86b79d8578-fxtxr\" (UID: \"53aaf5c5-a07c-4b5d-8085-2d2b30008c52\") " pod="calico-system/whisker-86b79d8578-fxtxr" Nov 8 00:17:30.925483 kubelet[2522]: I1108 00:17:30.925507 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53aaf5c5-a07c-4b5d-8085-2d2b30008c52-whisker-ca-bundle\") pod \"whisker-86b79d8578-fxtxr\" (UID: \"53aaf5c5-a07c-4b5d-8085-2d2b30008c52\") " pod="calico-system/whisker-86b79d8578-fxtxr" Nov 8 00:17:31.167005 containerd[1465]: time="2025-11-08T00:17:31.166848093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86b79d8578-fxtxr,Uid:53aaf5c5-a07c-4b5d-8085-2d2b30008c52,Namespace:calico-system,Attempt:0,}" Nov 8 00:17:31.294912 systemd-networkd[1401]: cali601bdec9ffe: Link UP Nov 8 00:17:31.295856 systemd-networkd[1401]: cali601bdec9ffe: Gained carrier Nov 8 00:17:31.366907 containerd[1465]: time="2025-11-08T00:17:31.366837260Z" level=info msg="StopPodSandbox for \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\"" Nov 8 00:17:31.367890 containerd[1465]: time="2025-11-08T00:17:31.367452003Z" level=info msg="StopPodSandbox for \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\"" Nov 8 00:17:31.370161 kubelet[2522]: I1108 00:17:31.370109 2522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffc1fd0-1ed9-42f4-a06d-067b691194ce" path="/var/lib/kubelet/pods/0ffc1fd0-1ed9-42f4-a06d-067b691194ce/volumes" Nov 8 00:17:31.623498 kubelet[2522]: E1108 00:17:31.623456 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.207 [INFO][3875] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.219 [INFO][3875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--86b79d8578--fxtxr-eth0 whisker-86b79d8578- calico-system 53aaf5c5-a07c-4b5d-8085-2d2b30008c52 1021 0 2025-11-08 00:17:30 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86b79d8578 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-86b79d8578-fxtxr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali601bdec9ffe [] [] }} ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Namespace="calico-system" Pod="whisker-86b79d8578-fxtxr" WorkloadEndpoint="localhost-k8s-whisker--86b79d8578--fxtxr-" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08
00:17:31.219 [INFO][3875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Namespace="calico-system" Pod="whisker-86b79d8578-fxtxr" WorkloadEndpoint="localhost-k8s-whisker--86b79d8578--fxtxr-eth0" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.248 [INFO][3889] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" HandleID="k8s-pod-network.9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Workload="localhost-k8s-whisker--86b79d8578--fxtxr-eth0" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.249 [INFO][3889] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" HandleID="k8s-pod-network.9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Workload="localhost-k8s-whisker--86b79d8578--fxtxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000367650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-86b79d8578-fxtxr", "timestamp":"2025-11-08 00:17:31.248917908 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.249 [INFO][3889] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.249 [INFO][3889] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.249 [INFO][3889] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.256 [INFO][3889] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" host="localhost" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.263 [INFO][3889] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.268 [INFO][3889] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.270 [INFO][3889] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.272 [INFO][3889] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.272 [INFO][3889] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" host="localhost" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.273 [INFO][3889] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2 Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.277 [INFO][3889] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" host="localhost" Nov 8 00:17:31.694614 containerd[1465]: 
2025-11-08 00:17:31.281 [INFO][3889] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" host="localhost" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.281 [INFO][3889] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" host="localhost" Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.281 [INFO][3889] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:31.694614 containerd[1465]: 2025-11-08 00:17:31.281 [INFO][3889] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" HandleID="k8s-pod-network.9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Workload="localhost-k8s-whisker--86b79d8578--fxtxr-eth0" Nov 8 00:17:31.695467 containerd[1465]: 2025-11-08 00:17:31.286 [INFO][3875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Namespace="calico-system" Pod="whisker-86b79d8578-fxtxr" WorkloadEndpoint="localhost-k8s-whisker--86b79d8578--fxtxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86b79d8578--fxtxr-eth0", GenerateName:"whisker-86b79d8578-", Namespace:"calico-system", SelfLink:"", UID:"53aaf5c5-a07c-4b5d-8085-2d2b30008c52", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86b79d8578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-86b79d8578-fxtxr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali601bdec9ffe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:31.695467 containerd[1465]: 2025-11-08 00:17:31.286 [INFO][3875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Namespace="calico-system" Pod="whisker-86b79d8578-fxtxr" WorkloadEndpoint="localhost-k8s-whisker--86b79d8578--fxtxr-eth0" Nov 8 00:17:31.695467 containerd[1465]: 2025-11-08 00:17:31.286 [INFO][3875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali601bdec9ffe ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Namespace="calico-system" Pod="whisker-86b79d8578-fxtxr" WorkloadEndpoint="localhost-k8s-whisker--86b79d8578--fxtxr-eth0" Nov 8 00:17:31.695467 containerd[1465]: 2025-11-08 00:17:31.295 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Namespace="calico-system" Pod="whisker-86b79d8578-fxtxr" WorkloadEndpoint="localhost-k8s-whisker--86b79d8578--fxtxr-eth0" Nov 8 00:17:31.695467 containerd[1465]: 2025-11-08 00:17:31.296 [INFO][3875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Namespace="calico-system" Pod="whisker-86b79d8578-fxtxr" WorkloadEndpoint="localhost-k8s-whisker--86b79d8578--fxtxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86b79d8578--fxtxr-eth0", GenerateName:"whisker-86b79d8578-", Namespace:"calico-system", SelfLink:"", UID:"53aaf5c5-a07c-4b5d-8085-2d2b30008c52", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86b79d8578", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2", Pod:"whisker-86b79d8578-fxtxr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali601bdec9ffe", MAC:"4e:d2:0d:ad:e0:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:31.695467 containerd[1465]: 2025-11-08 00:17:31.682 [INFO][3875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2" Namespace="calico-system" Pod="whisker-86b79d8578-fxtxr" WorkloadEndpoint="localhost-k8s-whisker--86b79d8578--fxtxr-eth0" Nov 8 00:17:31.839564 containerd[1465]: time="2025-11-08T00:17:31.838629853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:31.839564 containerd[1465]: time="2025-11-08T00:17:31.839526636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:31.839564 containerd[1465]: time="2025-11-08T00:17:31.839549368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:31.839814 containerd[1465]: time="2025-11-08T00:17:31.839684251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:31.869785 systemd[1]: Started cri-containerd-9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2.scope - libcontainer container 9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2. 
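The entries above trace Calico's IPAM walk for the whisker pod end to end: the plugin takes the host-wide IPAM lock, confirms this host's affinity to the block 192.168.88.128/26, claims the lowest free address (192.168.88.129), writes the block back to persist the claim, and releases the lock. A minimal Go sketch of that claim step — the block type and lock here are invented stand-ins, not Calico's real data model:

package main

import (
	"fmt"
	"net"
	"sync"
)

// block is a hypothetical model of a /26 affine to this host;
// allocated tracks which offsets within the block are claimed.
type block struct {
	cidr      *net.IPNet
	allocated map[int]bool
}

var hostIPAMLock sync.Mutex // stand-in for the host-wide IPAM lock in the log

// assignOne claims the lowest free address in the block, mirroring the
// "Attempting to assign 1 addresses from block" step logged above.
func assignOne(b *block) (net.IP, error) {
	hostIPAMLock.Lock() // "About to acquire host-wide IPAM lock."
	defer hostIPAMLock.Unlock()
	base := b.cidr.IP.To4()
	ones, bits := b.cidr.Mask.Size()
	for off := 1; off < 1<<(bits-ones); off++ { // skip the network address
		if !b.allocated[off] {
			b.allocated[off] = true // "Writing block in order to claim IPs"
			ip := make(net.IP, 4)
			copy(ip, base)
			ip[3] += byte(off)
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, allocated: map[int]bool{}}
	for i := 0; i < 4; i++ {
		ip, _ := assignOne(b)
		fmt.Println(ip) // .129, .130, .131, .132 — the addresses the pods below receive
	}
}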
Nov 8 00:17:31.889639 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.854 [INFO][4011] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.855 [INFO][4011] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" iface="eth0" netns="/var/run/netns/cni-685f69c7-41d1-5e40-976b-f24d220f7215" Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.857 [INFO][4011] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" iface="eth0" netns="/var/run/netns/cni-685f69c7-41d1-5e40-976b-f24d220f7215" Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.858 [INFO][4011] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" iface="eth0" netns="/var/run/netns/cni-685f69c7-41d1-5e40-976b-f24d220f7215" Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.858 [INFO][4011] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.858 [INFO][4011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.888 [INFO][4094] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" HandleID="k8s-pod-network.5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.888 [INFO][4094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.888 [INFO][4094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.898 [WARNING][4094] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" HandleID="k8s-pod-network.5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.898 [INFO][4094] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" HandleID="k8s-pod-network.5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.900 [INFO][4094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:31.909202 containerd[1465]: 2025-11-08 00:17:31.904 [INFO][4011] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:31.910793 containerd[1465]: time="2025-11-08T00:17:31.910752672Z" level=info msg="TearDown network for sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\" successfully" Nov 8 00:17:31.910854 containerd[1465]: time="2025-11-08T00:17:31.910793118Z" level=info msg="StopPodSandbox for \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\" returns successfully" Nov 8 00:17:31.911204 kubelet[2522]: E1108 00:17:31.911177 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:31.913162 systemd[1]: run-netns-cni\x2d685f69c7\x2d41d1\x2d5e40\x2d976b\x2df24d220f7215.mount: Deactivated successfully. Nov 8 00:17:31.914062 containerd[1465]: time="2025-11-08T00:17:31.913890337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sdjt5,Uid:60f79ede-23ce-4941-9970-af1b19912c40,Namespace:kube-system,Attempt:1,}" Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.860 [INFO][4012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.860 [INFO][4012] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" iface="eth0" netns="/var/run/netns/cni-3ec90639-6aaf-5e75-6d9f-8c1472d51011" Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.860 [INFO][4012] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" iface="eth0" netns="/var/run/netns/cni-3ec90639-6aaf-5e75-6d9f-8c1472d51011" Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.860 [INFO][4012] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" iface="eth0" netns="/var/run/netns/cni-3ec90639-6aaf-5e75-6d9f-8c1472d51011" Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.860 [INFO][4012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.860 [INFO][4012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.901 [INFO][4100] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" HandleID="k8s-pod-network.251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.901 [INFO][4100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.901 [INFO][4100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.910 [WARNING][4100] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" HandleID="k8s-pod-network.251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.910 [INFO][4100] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" HandleID="k8s-pod-network.251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.913 [INFO][4100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:31.922029 containerd[1465]: 2025-11-08 00:17:31.918 [INFO][4012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:31.922394 containerd[1465]: time="2025-11-08T00:17:31.922227834Z" level=info msg="TearDown network for sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\" successfully" Nov 8 00:17:31.922394 containerd[1465]: time="2025-11-08T00:17:31.922256187Z" level=info msg="StopPodSandbox for \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\" returns successfully" Nov 8 00:17:31.922750 kubelet[2522]: E1108 00:17:31.922672 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:31.923110 containerd[1465]: time="2025-11-08T00:17:31.923088339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqptp,Uid:67db3da6-15c7-4668-a4a2-6c4899b5791f,Namespace:kube-system,Attempt:1,}" Nov 8 00:17:31.925892 containerd[1465]: time="2025-11-08T00:17:31.925840871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86b79d8578-fxtxr,Uid:53aaf5c5-a07c-4b5d-8085-2d2b30008c52,Namespace:calico-system,Attempt:0,} returns sandbox id \"9206160930dbe5f717184fcd4649722c1d9c5613adae9b35c1f6ae78d6ce86b2\"" Nov 8 00:17:31.927690 containerd[1465]: time="2025-11-08T00:17:31.927531902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:17:32.243969 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:57888.service - OpenSSH per-connection server daemon (10.0.0.1:57888). Nov 8 00:17:32.307114 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 57888 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:17:32.309117 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:32.314239 systemd-logind[1450]: New session 9 of user core. Nov 8 00:17:32.319722 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:17:32.365015 containerd[1465]: time="2025-11-08T00:17:32.364959740Z" level=info msg="StopPodSandbox for \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\"" Nov 8 00:17:32.367992 systemd-networkd[1401]: cali601bdec9ffe: Gained IPv6LL Nov 8 00:17:32.479345 sshd[4125]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:32.484824 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:57888.service: Deactivated successfully. Nov 8 00:17:32.487342 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:17:32.488041 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:17:32.489308 systemd-logind[1450]: Removed session 9. 
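The recurring dns.go:153 warning from the kubelet means the node's resolv.conf lists more nameservers than the resolver limit of three, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied to pods. A sketch of that truncation; the fourth nameserver below is hypothetical, since the omitted entries are not shown in this log:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolvers honor at most three entries

// trimNameservers keeps the first three nameserver lines, as the kubelet
// warning above describes ("some nameservers have been omitted").
func trimNameservers(resolvConf string) (kept, dropped []string) {
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped
}

func main() {
	// 9.9.9.9 is an assumed fourth entry for illustration only.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, dropped := trimNameservers(conf)
	fmt.Println("applied:", strings.Join(kept, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("omitted:", dropped)
}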
Nov 8 00:17:32.547977 systemd[1]: run-netns-cni\x2d3ec90639\x2d6aaf\x2d5e75\x2d6d9f\x2d8c1472d51011.mount: Deactivated successfully. Nov 8 00:17:32.616679 containerd[1465]: time="2025-11-08T00:17:32.616588977Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:32.627688 kubelet[2522]: E1108 00:17:32.627529 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.596 [INFO][4145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.596 [INFO][4145] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" iface="eth0" netns="/var/run/netns/cni-0cd58c85-64ec-7a9d-e214-d90a8aee0dd1" Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.597 [INFO][4145] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" iface="eth0" netns="/var/run/netns/cni-0cd58c85-64ec-7a9d-e214-d90a8aee0dd1" Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.597 [INFO][4145] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" iface="eth0" netns="/var/run/netns/cni-0cd58c85-64ec-7a9d-e214-d90a8aee0dd1" Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.597 [INFO][4145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.597 [INFO][4145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.624 [INFO][4161] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" HandleID="k8s-pod-network.f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.624 [INFO][4161] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.624 [INFO][4161] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.631 [WARNING][4161] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" HandleID="k8s-pod-network.f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.631 [INFO][4161] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" HandleID="k8s-pod-network.f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.634 [INFO][4161] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:32.640338 containerd[1465]: 2025-11-08 00:17:32.637 [INFO][4145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:32.642736 containerd[1465]: time="2025-11-08T00:17:32.642685762Z" level=info msg="TearDown network for sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\" successfully" Nov 8 00:17:32.642736 containerd[1465]: time="2025-11-08T00:17:32.642733451Z" level=info msg="StopPodSandbox for \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\" returns successfully" Nov 8 00:17:32.643783 containerd[1465]: time="2025-11-08T00:17:32.643617219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdtmc,Uid:485c28c7-3ce9-4d8e-9396-e75393354e2f,Namespace:calico-system,Attempt:1,}" Nov 8 00:17:32.644436 systemd[1]: run-netns-cni\x2d0cd58c85\x2d64ec\x2d7a9d\x2de214\x2dd90a8aee0dd1.mount: Deactivated successfully. Nov 8 00:17:32.723031 containerd[1465]: time="2025-11-08T00:17:32.716093280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:17:32.723306 containerd[1465]: time="2025-11-08T00:17:32.716171296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:17:32.723630 kubelet[2522]: E1108 00:17:32.723561 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:17:32.723683 kubelet[2522]: E1108 00:17:32.723650 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:17:32.723962 kubelet[2522]: E1108 00:17:32.723919 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:074ed4dfb7104ec888d28786a94cff0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gscnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b79d8578-fxtxr_calico-system(53aaf5c5-a07c-4b5d-8085-2d2b30008c52): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:32.727563 containerd[1465]: time="2025-11-08T00:17:32.727529619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:17:33.430307 containerd[1465]: time="2025-11-08T00:17:33.430249503Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:33.461145 containerd[1465]: time="2025-11-08T00:17:33.460912311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:17:33.461564 containerd[1465]: time="2025-11-08T00:17:33.461489834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:17:33.462310 kubelet[2522]: E1108 00:17:33.462258 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:17:33.462628 kubelet[2522]: E1108 00:17:33.462326 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:17:33.464930 kubelet[2522]: E1108 00:17:33.464830 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gscnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b79d8578-fxtxr_calico-system(53aaf5c5-a07c-4b5d-8085-2d2b30008c52): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:33.466107 kubelet[2522]: E1108 00:17:33.466038 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b79d8578-fxtxr" podUID="53aaf5c5-a07c-4b5d-8085-2d2b30008c52" Nov 8 00:17:33.476923 systemd-networkd[1401]: cali484c2ec8182: Link UP Nov 8 00:17:33.477848 systemd-networkd[1401]: cali484c2ec8182: Gained carrier Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.291 [INFO][4213] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 
00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.302 [INFO][4213] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0 coredns-674b8bbfcf- kube-system 60f79ede-23ce-4941-9970-af1b19912c40 1036 0 2025-11-08 00:16:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-sdjt5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali484c2ec8182 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdjt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sdjt5-" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.302 [INFO][4213] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdjt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.363 [INFO][4228] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" HandleID="k8s-pod-network.b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.363 [INFO][4228] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" HandleID="k8s-pod-network.b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c63f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-sdjt5", "timestamp":"2025-11-08 00:17:33.363030121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.363 [INFO][4228] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.363 [INFO][4228] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.363 [INFO][4228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.370 [INFO][4228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" host="localhost" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.374 [INFO][4228] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.378 [INFO][4228] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.380 [INFO][4228] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.382 [INFO][4228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.382 [INFO][4228] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" host="localhost" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.383 [INFO][4228] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884 Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.393 [INFO][4228] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" host="localhost" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.465 [INFO][4228] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" host="localhost" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.465 [INFO][4228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" host="localhost" Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.465 [INFO][4228] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:17:33.506863 containerd[1465]: 2025-11-08 00:17:33.466 [INFO][4228] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" HandleID="k8s-pod-network.b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:33.507607 containerd[1465]: 2025-11-08 00:17:33.471 [INFO][4213] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdjt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"60f79ede-23ce-4941-9970-af1b19912c40", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-sdjt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali484c2ec8182", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:33.507607 containerd[1465]: 2025-11-08 00:17:33.472 [INFO][4213] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdjt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:33.507607 containerd[1465]: 2025-11-08 00:17:33.472 [INFO][4213] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali484c2ec8182 ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdjt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:33.507607 containerd[1465]: 2025-11-08 00:17:33.476 [INFO][4213] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdjt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:33.507607 
containerd[1465]: 2025-11-08 00:17:33.477 [INFO][4213] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdjt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"60f79ede-23ce-4941-9970-af1b19912c40", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884", Pod:"coredns-674b8bbfcf-sdjt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali484c2ec8182", MAC:"26:93:45:1d:d6:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:33.507607 containerd[1465]: 2025-11-08 00:17:33.501 [INFO][4213] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdjt5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:33.549609 containerd[1465]: time="2025-11-08T00:17:33.549225304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:33.549609 containerd[1465]: time="2025-11-08T00:17:33.549307688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:33.549609 containerd[1465]: time="2025-11-08T00:17:33.549324910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:33.549609 containerd[1465]: time="2025-11-08T00:17:33.549474491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:33.582816 systemd[1]: Started cri-containerd-b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884.scope - libcontainer container b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884. Nov 8 00:17:33.605622 systemd-networkd[1401]: calicd203c5bd52: Link UP Nov 8 00:17:33.606031 systemd-networkd[1401]: calicd203c5bd52: Gained carrier Nov 8 00:17:33.607024 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.473 [INFO][4237] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.495 [INFO][4237] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--kqptp-eth0 coredns-674b8bbfcf- kube-system 67db3da6-15c7-4668-a4a2-6c4899b5791f 1035 0 2025-11-08 00:16:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-kqptp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicd203c5bd52 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqptp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kqptp-" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.495 [INFO][4237] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqptp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.529 [INFO][4265] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" HandleID="k8s-pod-network.443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.530 [INFO][4265] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" HandleID="k8s-pod-network.443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5480), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-kqptp", "timestamp":"2025-11-08 00:17:33.529293882 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.530 [INFO][4265] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.530 [INFO][4265] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.530 [INFO][4265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.539 [INFO][4265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" host="localhost" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.554 [INFO][4265] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.572 [INFO][4265] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.575 [INFO][4265] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.578 [INFO][4265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.578 [INFO][4265] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" host="localhost" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.580 [INFO][4265] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9 Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.588 [INFO][4265] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" host="localhost" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.595 [INFO][4265] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" host="localhost" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.595 [INFO][4265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" host="localhost" Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.595 [INFO][4265] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:17:33.625981 containerd[1465]: 2025-11-08 00:17:33.595 [INFO][4265] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" HandleID="k8s-pod-network.443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:33.627026 containerd[1465]: 2025-11-08 00:17:33.601 [INFO][4237] cni-plugin/k8s.go 418: Populated endpoint ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqptp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kqptp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67db3da6-15c7-4668-a4a2-6c4899b5791f", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-kqptp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd203c5bd52", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:33.627026 containerd[1465]: 2025-11-08 00:17:33.601 [INFO][4237] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqptp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:33.627026 containerd[1465]: 2025-11-08 00:17:33.601 [INFO][4237] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd203c5bd52 ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqptp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:33.627026 containerd[1465]: 2025-11-08 00:17:33.610 [INFO][4237] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqptp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:33.627026 
containerd[1465]: 2025-11-08 00:17:33.613 [INFO][4237] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqptp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kqptp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67db3da6-15c7-4668-a4a2-6c4899b5791f", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9", Pod:"coredns-674b8bbfcf-kqptp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd203c5bd52", MAC:"56:d6:a0:dc:41:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:33.627026 containerd[1465]: 2025-11-08 00:17:33.622 [INFO][4237] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kqptp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:33.631711 kubelet[2522]: E1108 00:17:33.631604 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b79d8578-fxtxr" 
podUID="53aaf5c5-a07c-4b5d-8085-2d2b30008c52" Nov 8 00:17:33.643130 containerd[1465]: time="2025-11-08T00:17:33.642791739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sdjt5,Uid:60f79ede-23ce-4941-9970-af1b19912c40,Namespace:kube-system,Attempt:1,} returns sandbox id \"b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884\"" Nov 8 00:17:33.647027 kubelet[2522]: E1108 00:17:33.646643 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:33.664279 containerd[1465]: time="2025-11-08T00:17:33.664161929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:33.664279 containerd[1465]: time="2025-11-08T00:17:33.664258510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:33.664279 containerd[1465]: time="2025-11-08T00:17:33.664281082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:33.664637 containerd[1465]: time="2025-11-08T00:17:33.664413000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:33.665592 containerd[1465]: time="2025-11-08T00:17:33.665530255Z" level=info msg="CreateContainer within sandbox \"b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:17:33.698903 containerd[1465]: time="2025-11-08T00:17:33.698758174Z" level=info msg="CreateContainer within sandbox \"b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d83e6e14dad49bd9fa8be18a8c33a2a3cdf72ac64881863ba23dd51950dd479c\"" Nov 8 00:17:33.699660 containerd[1465]: time="2025-11-08T00:17:33.699610523Z" level=info msg="StartContainer for \"d83e6e14dad49bd9fa8be18a8c33a2a3cdf72ac64881863ba23dd51950dd479c\"" Nov 8 00:17:33.699817 systemd[1]: Started cri-containerd-443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9.scope - libcontainer container 443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9. 
Nov 8 00:17:33.707400 systemd-networkd[1401]: cali18dda85ea61: Link UP Nov 8 00:17:33.708451 systemd-networkd[1401]: cali18dda85ea61: Gained carrier Nov 8 00:17:33.722185 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.508 [INFO][4250] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.524 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vdtmc-eth0 csi-node-driver- calico-system 485c28c7-3ce9-4d8e-9396-e75393354e2f 1043 0 2025-11-08 00:17:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vdtmc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali18dda85ea61 [] [] }} ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Namespace="calico-system" Pod="csi-node-driver-vdtmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--vdtmc-" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.524 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Namespace="calico-system" Pod="csi-node-driver-vdtmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.566 [INFO][4287] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" HandleID="k8s-pod-network.1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.566 [INFO][4287] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" HandleID="k8s-pod-network.1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vdtmc", "timestamp":"2025-11-08 00:17:33.566100352 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.566 [INFO][4287] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.595 [INFO][4287] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
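The cni-plugin/k8s.go records above (CmdAddK8s, and later the "veth was already gone, nothing to do" teardown) are Calico's implementation of the standard CNI entry points. A skeletal sketch of those entry points follows, assuming github.com/containernetworking/cni v1.x; it reproduces none of Calico's logic, and a real ADD would populate the Result with the interface and IPs (the "Populated endpoint" / "Calico CNI using IPs" steps logged below).

    // cnisketch.go - skeleton of the ADD/CHECK/DEL entry points behind log
    // lines like "Extracted identifiers for CmdAddK8s". Assumes the
    // github.com/containernetworking/cni v1.x skel package; not Calico's code.
    package main

    import (
        "fmt"
        "os"

        "github.com/containernetworking/cni/pkg/skel"
        "github.com/containernetworking/cni/pkg/types"
        types100 "github.com/containernetworking/cni/pkg/types/100"
        "github.com/containernetworking/cni/pkg/version"
    )

    func cmdAdd(args *skel.CmdArgs) error {
        // args carries exactly the identifiers visible in the log:
        // ContainerID, Netns (e.g. /var/run/netns/cni-...), IfName (eth0).
        // Diagnostics go to stderr; stdout is reserved for the Result JSON.
        fmt.Fprintf(os.Stderr, "ADD %s netns=%s if=%s\n",
            args.ContainerID, args.Netns, args.IfName)
        // A real plugin fills in Interfaces and IPs here; an empty Result
        // keeps the sketch short while staying protocol-shaped.
        return types.PrintResult(&types100.Result{CNIVersion: "1.0.0"}, "1.0.0")
    }

    func cmdDel(args *skel.CmdArgs) error {
        // DEL must be idempotent: "Workload's veth was already gone.
        // Nothing to do." is the expected retry path, not an error.
        return nil
    }

    func cmdCheck(args *skel.CmdArgs) error { return nil }

    func main() {
        skel.PluginMain(cmdAdd, cmdCheck, cmdDel, version.All, "sketch-cni v0")
    }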
Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.595 [INFO][4287] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.640 [INFO][4287] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" host="localhost" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.655 [INFO][4287] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.676 [INFO][4287] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.678 [INFO][4287] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.680 [INFO][4287] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.680 [INFO][4287] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" host="localhost" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.682 [INFO][4287] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382 Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.688 [INFO][4287] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" host="localhost" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.696 [INFO][4287] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" host="localhost" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.696 [INFO][4287] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" host="localhost" Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.697 [INFO][4287] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
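The ipam records above trace one complete allocation: acquire the host-wide lock, confirm the host's affinity for block 192.168.88.128/26, load the block, claim the next free address, write the block back, release the lock. A toy Go sketch of that block scan, under stated simplifications: a sync.Mutex stands in for both the host-wide lock and the datastore write, and the pre-claimed addresses are chosen to reproduce the .132 result logged for csi-node-driver-vdtmc.

    // ipamsketch.go - toy version of the block allocation above: scan the
    // host-affine /26 for the first free address, claim it, return a /32.
    // Calico also persists the block ("Writing block in order to claim IPs");
    // a mutex stands in for that and for the host-wide IPAM lock.
    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    type block struct {
        mu   sync.Mutex            // stand-in for the host-wide IPAM lock
        cidr netip.Prefix          // e.g. 192.168.88.128/26
        used map[netip.Addr]string // address -> owning handle
    }

    func (b *block) assign(handle string) (netip.Prefix, error) {
        b.mu.Lock()
        defer b.mu.Unlock()
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.used[a]; taken {
                continue
            }
            b.used[a] = handle
            return netip.PrefixFrom(a, 32), nil
        }
        return netip.Prefix{}, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        b := &block{
            cidr: netip.MustParsePrefix("192.168.88.128/26"),
            used: map[netip.Addr]string{},
        }
        // Pre-claim .128 through .131 as in the log; the next assignment then
        // yields 192.168.88.132/32, matching csi-node-driver-vdtmc above.
        for a, n := netip.MustParseAddr("192.168.88.128"), 0; n < 4; a, n = a.Next(), n+1 {
            b.used[a] = "preexisting"
        }
        fmt.Println(b.assign("k8s-pod-network.1b4b1f87...")) // 192.168.88.132/32 <nil>
    }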
Nov 8 00:17:33.726272 containerd[1465]: 2025-11-08 00:17:33.697 [INFO][4287] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" HandleID="k8s-pod-network.1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:33.727591 containerd[1465]: 2025-11-08 00:17:33.702 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Namespace="calico-system" Pod="csi-node-driver-vdtmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--vdtmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vdtmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"485c28c7-3ce9-4d8e-9396-e75393354e2f", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vdtmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18dda85ea61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:33.727591 containerd[1465]: 2025-11-08 00:17:33.702 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Namespace="calico-system" Pod="csi-node-driver-vdtmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:33.727591 containerd[1465]: 2025-11-08 00:17:33.702 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18dda85ea61 ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Namespace="calico-system" Pod="csi-node-driver-vdtmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:33.727591 containerd[1465]: 2025-11-08 00:17:33.709 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Namespace="calico-system" Pod="csi-node-driver-vdtmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:33.727591 containerd[1465]: 2025-11-08 00:17:33.709 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Namespace="calico-system" Pod="csi-node-driver-vdtmc" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--vdtmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vdtmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"485c28c7-3ce9-4d8e-9396-e75393354e2f", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382", Pod:"csi-node-driver-vdtmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18dda85ea61", MAC:"ea:a8:d3:3c:9e:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:33.727591 containerd[1465]: 2025-11-08 00:17:33.721 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382" Namespace="calico-system" Pod="csi-node-driver-vdtmc" WorkloadEndpoint="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:33.742981 systemd[1]: Started cri-containerd-d83e6e14dad49bd9fa8be18a8c33a2a3cdf72ac64881863ba23dd51950dd479c.scope - libcontainer container d83e6e14dad49bd9fa8be18a8c33a2a3cdf72ac64881863ba23dd51950dd479c. Nov 8 00:17:33.759320 containerd[1465]: time="2025-11-08T00:17:33.759160791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:33.759320 containerd[1465]: time="2025-11-08T00:17:33.759225181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:33.759685 containerd[1465]: time="2025-11-08T00:17:33.759239729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:33.761291 containerd[1465]: time="2025-11-08T00:17:33.761209132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:33.769783 containerd[1465]: time="2025-11-08T00:17:33.769711649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kqptp,Uid:67db3da6-15c7-4668-a4a2-6c4899b5791f,Namespace:kube-system,Attempt:1,} returns sandbox id \"443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9\"" Nov 8 00:17:33.771212 kubelet[2522]: E1108 00:17:33.771176 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:33.786753 systemd[1]: Started cri-containerd-1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382.scope - libcontainer container 1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382. Nov 8 00:17:33.799989 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:17:33.901243 kubelet[2522]: I1108 00:17:33.900790 2522 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:17:33.901243 kubelet[2522]: E1108 00:17:33.901206 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:33.901523 containerd[1465]: time="2025-11-08T00:17:33.901080332Z" level=info msg="CreateContainer within sandbox \"443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:17:34.028410 containerd[1465]: time="2025-11-08T00:17:34.028358795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vdtmc,Uid:485c28c7-3ce9-4d8e-9396-e75393354e2f,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382\"" Nov 8 00:17:34.028839 containerd[1465]: time="2025-11-08T00:17:34.028374985Z" level=info msg="StartContainer for \"d83e6e14dad49bd9fa8be18a8c33a2a3cdf72ac64881863ba23dd51950dd479c\" returns successfully" Nov 8 00:17:34.029873 containerd[1465]: time="2025-11-08T00:17:34.029841295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:17:34.069846 containerd[1465]: time="2025-11-08T00:17:34.069786022Z" level=info msg="CreateContainer within sandbox \"443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"520dbab6d41f7ad17105c809ff1f156e037788468d6334b505260d2da06300d2\"" Nov 8 00:17:34.071481 containerd[1465]: time="2025-11-08T00:17:34.071435525Z" level=info msg="StartContainer for \"520dbab6d41f7ad17105c809ff1f156e037788468d6334b505260d2da06300d2\"" Nov 8 00:17:34.116837 systemd[1]: Started cri-containerd-520dbab6d41f7ad17105c809ff1f156e037788468d6334b505260d2da06300d2.scope - libcontainer container 520dbab6d41f7ad17105c809ff1f156e037788468d6334b505260d2da06300d2. 
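The recurring kubelet dns.go:153 warnings mean the node's resolv.conf lists more nameservers than the classic resolver limit of three, so kubelet applies only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs the rest as omitted. A short Go sketch of that trim; the fourth nameserver in the example input is invented purely for illustration.

    // dnscap.go - sketch of the trim behind "Nameserver limits exceeded":
    // keep only the first three nameserver entries from a resolv.conf body,
    // mirroring the applied line in the kubelet warning above.
    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // classic resolver limit enforced by kubelet

    func applyNameserverLimit(resolvConf string) (kept, dropped []string) {
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) < 2 || fields[0] != "nameserver" {
                continue // comments, search/options lines, blanks
            }
            if len(kept) < maxNameservers {
                kept = append(kept, fields[1])
            } else {
                dropped = append(dropped, fields[1])
            }
        }
        return kept, dropped
    }

    func main() {
        // 9.9.9.9 is a hypothetical fourth entry added to trigger the warning.
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
        kept, dropped := applyNameserverLimit(conf)
        fmt.Println("applied:", strings.Join(kept, " "), "omitted:", dropped)
    }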
Nov 8 00:17:34.155753 containerd[1465]: time="2025-11-08T00:17:34.155702338Z" level=info msg="StartContainer for \"520dbab6d41f7ad17105c809ff1f156e037788468d6334b505260d2da06300d2\" returns successfully" Nov 8 00:17:34.219609 kernel: bpftool[4563]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:17:34.366142 containerd[1465]: time="2025-11-08T00:17:34.365788346Z" level=info msg="StopPodSandbox for \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\"" Nov 8 00:17:34.366142 containerd[1465]: time="2025-11-08T00:17:34.365963163Z" level=info msg="StopPodSandbox for \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\"" Nov 8 00:17:34.367188 containerd[1465]: time="2025-11-08T00:17:34.366654109Z" level=info msg="StopPodSandbox for \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\"" Nov 8 00:17:34.367188 containerd[1465]: time="2025-11-08T00:17:34.365963494Z" level=info msg="StopPodSandbox for \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\"" Nov 8 00:17:34.419583 containerd[1465]: time="2025-11-08T00:17:34.419434038Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:34.424081 containerd[1465]: time="2025-11-08T00:17:34.423525462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:17:34.424081 containerd[1465]: time="2025-11-08T00:17:34.423661858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:17:34.424274 kubelet[2522]: E1108 00:17:34.423916 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:17:34.424274 kubelet[2522]: E1108 00:17:34.423974 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:17:34.425618 kubelet[2522]: E1108 00:17:34.424627 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2vg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vdtmc_calico-system(485c28c7-3ce9-4d8e-9396-e75393354e2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:34.428626 containerd[1465]: time="2025-11-08T00:17:34.426564141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:17:34.550648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984065434.mount: Deactivated successfully. Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.470 [INFO][4626] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.470 [INFO][4626] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" iface="eth0" netns="/var/run/netns/cni-55ffbcc0-5c27-0877-70bd-c10bcc07d5e9" Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.470 [INFO][4626] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" iface="eth0" netns="/var/run/netns/cni-55ffbcc0-5c27-0877-70bd-c10bcc07d5e9" Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.470 [INFO][4626] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" iface="eth0" netns="/var/run/netns/cni-55ffbcc0-5c27-0877-70bd-c10bcc07d5e9" Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.471 [INFO][4626] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.471 [INFO][4626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.513 [INFO][4654] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" HandleID="k8s-pod-network.9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0" Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.513 [INFO][4654] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.513 [INFO][4654] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.528 [WARNING][4654] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" HandleID="k8s-pod-network.9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0" Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.528 [INFO][4654] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" HandleID="k8s-pod-network.9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0" Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.533 [INFO][4654] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:34.553835 containerd[1465]: 2025-11-08 00:17:34.542 [INFO][4626] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Nov 8 00:17:34.555230 containerd[1465]: time="2025-11-08T00:17:34.553951857Z" level=info msg="TearDown network for sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\" successfully" Nov 8 00:17:34.555230 containerd[1465]: time="2025-11-08T00:17:34.553987314Z" level=info msg="StopPodSandbox for \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\" returns successfully" Nov 8 00:17:34.558017 systemd[1]: run-netns-cni\x2d55ffbcc0\x2d5c27\x2d0877\x2d70bd\x2dc10bcc07d5e9.mount: Deactivated successfully. Nov 8 00:17:34.559275 containerd[1465]: time="2025-11-08T00:17:34.559238603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2fwvz,Uid:7f4fec81-40b7-4cbb-9ed0-146aff61e0a7,Namespace:calico-system,Attempt:1,}" Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.475 [INFO][4614] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.475 [INFO][4614] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" iface="eth0" netns="/var/run/netns/cni-f75f8c4e-9b1e-5512-2483-969fdfef7b1d" Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.476 [INFO][4614] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" iface="eth0" netns="/var/run/netns/cni-f75f8c4e-9b1e-5512-2483-969fdfef7b1d" Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.482 [INFO][4614] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" iface="eth0" netns="/var/run/netns/cni-f75f8c4e-9b1e-5512-2483-969fdfef7b1d" Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.483 [INFO][4614] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.483 [INFO][4614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.521 [INFO][4669] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" HandleID="k8s-pod-network.7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.522 [INFO][4669] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.533 [INFO][4669] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.543 [WARNING][4669] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" HandleID="k8s-pod-network.7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.543 [INFO][4669] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" HandleID="k8s-pod-network.7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.545 [INFO][4669] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:34.569606 containerd[1465]: 2025-11-08 00:17:34.555 [INFO][4614] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:34.569606 containerd[1465]: time="2025-11-08T00:17:34.567717555Z" level=info msg="TearDown network for sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\" successfully" Nov 8 00:17:34.569606 containerd[1465]: time="2025-11-08T00:17:34.567747842Z" level=info msg="StopPodSandbox for \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\" returns successfully" Nov 8 00:17:34.571104 systemd[1]: run-netns-cni\x2df75f8c4e\x2d9b1e\x2d5512\x2d2483\x2d969fdfef7b1d.mount: Deactivated successfully. 
Nov 8 00:17:34.571807 containerd[1465]: time="2025-11-08T00:17:34.571781506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7468b9b5-s94kb,Uid:0a4ccd10-3267-4054-9505-eb9db275a87f,Namespace:calico-system,Attempt:1,}" Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.451 [INFO][4625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.451 [INFO][4625] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" iface="eth0" netns="/var/run/netns/cni-cadfd72b-f325-6830-408f-b4540d407be2" Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.453 [INFO][4625] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" iface="eth0" netns="/var/run/netns/cni-cadfd72b-f325-6830-408f-b4540d407be2" Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.460 [INFO][4625] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" iface="eth0" netns="/var/run/netns/cni-cadfd72b-f325-6830-408f-b4540d407be2" Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.460 [INFO][4625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.460 [INFO][4625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.524 [INFO][4649] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" HandleID="k8s-pod-network.a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.526 [INFO][4649] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.545 [INFO][4649] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.552 [WARNING][4649] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" HandleID="k8s-pod-network.a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.553 [INFO][4649] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" HandleID="k8s-pod-network.a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.555 [INFO][4649] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:34.572996 containerd[1465]: 2025-11-08 00:17:34.570 [INFO][4625] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Nov 8 00:17:34.573704 containerd[1465]: time="2025-11-08T00:17:34.573565062Z" level=info msg="TearDown network for sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\" successfully" Nov 8 00:17:34.574403 containerd[1465]: time="2025-11-08T00:17:34.574378908Z" level=info msg="StopPodSandbox for \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\" returns successfully" Nov 8 00:17:34.575065 containerd[1465]: time="2025-11-08T00:17:34.575044797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8577fdc947-456nr,Uid:89b15836-7628-4868-bd25-c2735fc5d488,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:17:34.576390 systemd[1]: run-netns-cni\x2dcadfd72b\x2df325\x2d6830\x2d408f\x2db4540d407be2.mount: Deactivated successfully. Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.467 [INFO][4607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.467 [INFO][4607] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" iface="eth0" netns="/var/run/netns/cni-11cb1870-d2fe-0585-d6a1-4d66787b0e80" Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.467 [INFO][4607] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" iface="eth0" netns="/var/run/netns/cni-11cb1870-d2fe-0585-d6a1-4d66787b0e80" Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.468 [INFO][4607] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" iface="eth0" netns="/var/run/netns/cni-11cb1870-d2fe-0585-d6a1-4d66787b0e80" Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.468 [INFO][4607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.468 [INFO][4607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.541 [INFO][4652] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" HandleID="k8s-pod-network.db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.541 [INFO][4652] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.557 [INFO][4652] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.566 [WARNING][4652] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" HandleID="k8s-pod-network.db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.566 [INFO][4652] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" HandleID="k8s-pod-network.db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.575 [INFO][4652] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:34.596011 containerd[1465]: 2025-11-08 00:17:34.589 [INFO][4607] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:34.597928 containerd[1465]: time="2025-11-08T00:17:34.596777026Z" level=info msg="TearDown network for sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\" successfully" Nov 8 00:17:34.597928 containerd[1465]: time="2025-11-08T00:17:34.596808946Z" level=info msg="StopPodSandbox for \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\" returns successfully" Nov 8 00:17:34.599026 containerd[1465]: time="2025-11-08T00:17:34.599002068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8577fdc947-lzmrt,Uid:031c3221-a127-47c9-883a-8bd9e65d7753,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:17:34.624229 systemd-networkd[1401]: vxlan.calico: Link UP Nov 8 00:17:34.624245 systemd-networkd[1401]: vxlan.calico: Gained carrier Nov 8 00:17:34.650053 kubelet[2522]: E1108 00:17:34.650008 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:34.667402 kubelet[2522]: E1108 00:17:34.667342 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:34.668044 kubelet[2522]: E1108 00:17:34.668011 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:34.677224 kubelet[2522]: I1108 00:17:34.676483 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kqptp" podStartSLOduration=39.676460994 podStartE2EDuration="39.676460994s" podCreationTimestamp="2025-11-08 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:17:34.676021299 +0000 UTC m=+45.446712121" watchObservedRunningTime="2025-11-08 00:17:34.676460994 +0000 UTC m=+45.447151816" Nov 8 00:17:34.762077 kubelet[2522]: I1108 00:17:34.761806 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sdjt5" podStartSLOduration=39.761784458 podStartE2EDuration="39.761784458s" podCreationTimestamp="2025-11-08 00:16:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:17:34.739832948 +0000 UTC m=+45.510523770" watchObservedRunningTime="2025-11-08 
00:17:34.761784458 +0000 UTC m=+45.532475270" Nov 8 00:17:34.810564 containerd[1465]: time="2025-11-08T00:17:34.810496017Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:34.816987 systemd-networkd[1401]: cali77e9f58a3f3: Link UP Nov 8 00:17:34.817729 containerd[1465]: time="2025-11-08T00:17:34.817306490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:17:34.817729 containerd[1465]: time="2025-11-08T00:17:34.817504391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:17:34.818001 systemd-networkd[1401]: cali77e9f58a3f3: Gained carrier Nov 8 00:17:34.818802 kubelet[2522]: E1108 00:17:34.818677 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:17:34.818802 kubelet[2522]: E1108 00:17:34.818781 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:17:34.820662 kubelet[2522]: E1108 00:17:34.819316 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2vg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vdtmc_calico-system(485c28c7-3ce9-4d8e-9396-e75393354e2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:34.820662 kubelet[2522]: E1108 00:17:34.820608 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.655 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--2fwvz-eth0 goldmane-666569f655- calico-system 7f4fec81-40b7-4cbb-9ed0-146aff61e0a7 1103 0 2025-11-08 00:17:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 
localhost goldmane-666569f655-2fwvz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali77e9f58a3f3 [] [] }} ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Namespace="calico-system" Pod="goldmane-666569f655-2fwvz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2fwvz-" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.655 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Namespace="calico-system" Pod="goldmane-666569f655-2fwvz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2fwvz-eth0" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.740 [INFO][4763] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" HandleID="k8s-pod-network.17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.740 [INFO][4763] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" HandleID="k8s-pod-network.17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-2fwvz", "timestamp":"2025-11-08 00:17:34.740291397 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.741 [INFO][4763] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.741 [INFO][4763] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.741 [INFO][4763] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.762 [INFO][4763] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" host="localhost" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.775 [INFO][4763] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.786 [INFO][4763] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.789 [INFO][4763] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.793 [INFO][4763] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.793 [INFO][4763] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" host="localhost" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.795 [INFO][4763] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82 Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.799 [INFO][4763] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" host="localhost" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.809 [INFO][4763] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" host="localhost" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.809 [INFO][4763] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" host="localhost" Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.809 [INFO][4763] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
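The pod_startup_latency_tracker records a few lines above are directly checkable: with firstStartedPulling/lastFinishedPulling at the zero time (no pull contributed), podStartSLOduration is simply observedRunningTime minus podCreationTimestamp. A tiny Go check that reproduces the 39.676460994s figure for coredns-674b8bbfcf-kqptp:

    // slosketch.go - verifies the arithmetic in the pod_startup_latency_tracker
    // lines above: startup SLO duration = observedRunningTime - creation time
    // when no image pulling contributed.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-11-08 00:16:55 +0000 UTC")
        running, _ := time.Parse(layout, "2025-11-08 00:17:34.676460994 +0000 UTC")
        fmt.Println(running.Sub(created).Seconds()) // 39.676460994, as logged
    }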
Nov 8 00:17:34.836714 containerd[1465]: 2025-11-08 00:17:34.809 [INFO][4763] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" HandleID="k8s-pod-network.17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0" Nov 8 00:17:34.837365 containerd[1465]: 2025-11-08 00:17:34.814 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Namespace="calico-system" Pod="goldmane-666569f655-2fwvz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2fwvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--2fwvz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7f4fec81-40b7-4cbb-9ed0-146aff61e0a7", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-2fwvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali77e9f58a3f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:34.837365 containerd[1465]: 2025-11-08 00:17:34.814 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Namespace="calico-system" Pod="goldmane-666569f655-2fwvz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2fwvz-eth0" Nov 8 00:17:34.837365 containerd[1465]: 2025-11-08 00:17:34.815 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77e9f58a3f3 ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Namespace="calico-system" Pod="goldmane-666569f655-2fwvz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2fwvz-eth0" Nov 8 00:17:34.837365 containerd[1465]: 2025-11-08 00:17:34.817 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Namespace="calico-system" Pod="goldmane-666569f655-2fwvz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2fwvz-eth0" Nov 8 00:17:34.837365 containerd[1465]: 2025-11-08 00:17:34.818 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Namespace="calico-system" Pod="goldmane-666569f655-2fwvz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2fwvz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--2fwvz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7f4fec81-40b7-4cbb-9ed0-146aff61e0a7", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82", Pod:"goldmane-666569f655-2fwvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali77e9f58a3f3", MAC:"22:dd:c1:52:47:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:34.837365 containerd[1465]: 2025-11-08 00:17:34.833 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82" Namespace="calico-system" Pod="goldmane-666569f655-2fwvz" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--2fwvz-eth0" Nov 8 00:17:34.863090 containerd[1465]: time="2025-11-08T00:17:34.862671061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:34.863090 containerd[1465]: time="2025-11-08T00:17:34.862746021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:34.863090 containerd[1465]: time="2025-11-08T00:17:34.862760619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:34.863090 containerd[1465]: time="2025-11-08T00:17:34.862856669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:34.886779 systemd[1]: Started cri-containerd-17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82.scope - libcontainer container 17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82. 
Nov 8 00:17:34.915058 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:17:34.927232 systemd-networkd[1401]: cali484c2ec8182: Gained IPv6LL Nov 8 00:17:34.927557 systemd-networkd[1401]: cali8c4eb3a2af9: Link UP Nov 8 00:17:34.933380 systemd-networkd[1401]: cali8c4eb3a2af9: Gained carrier Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.731 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0 calico-kube-controllers-5f7468b9b5- calico-system 0a4ccd10-3267-4054-9505-eb9db275a87f 1104 0 2025-11-08 00:17:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f7468b9b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f7468b9b5-s94kb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8c4eb3a2af9 [] [] }} ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Namespace="calico-system" Pod="calico-kube-controllers-5f7468b9b5-s94kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.731 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Namespace="calico-system" Pod="calico-kube-controllers-5f7468b9b5-s94kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.809 [INFO][4791] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" HandleID="k8s-pod-network.41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.810 [INFO][4791] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" HandleID="k8s-pod-network.41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f7468b9b5-s94kb", "timestamp":"2025-11-08 00:17:34.809945414 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.810 [INFO][4791] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.810 [INFO][4791] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
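The systemd-networkd "Link UP" / "Gained carrier" records above are reactions to kernel netlink events for the Calico veths (cali...) and the vxlan.calico overlay device. The same state can be read back over netlink; a sketch assuming the github.com/vishvananda/netlink package, Linux-only, and using interface names taken from the log:

    // linksketch.go - query the link state behind the "Link UP" /
    // "Gained carrier" messages above. Assumes github.com/vishvananda/netlink;
    // reading link state normally needs no special privileges.
    package main

    import (
        "fmt"

        "github.com/vishvananda/netlink"
    )

    func main() {
        for _, name := range []string{"vxlan.calico", "cali77e9f58a3f3"} {
            link, err := netlink.LinkByName(name)
            if err != nil {
                fmt.Println(name, "not present:", err)
                continue
            }
            attrs := link.Attrs()
            fmt.Printf("%s type=%s mtu=%d state=%s\n",
                name, link.Type(), attrs.MTU, attrs.OperState)
        }
    }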
Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.810 [INFO][4791] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.857 [INFO][4791] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" host="localhost" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.872 [INFO][4791] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.883 [INFO][4791] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.886 [INFO][4791] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.888 [INFO][4791] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.888 [INFO][4791] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" host="localhost" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.890 [INFO][4791] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.896 [INFO][4791] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" host="localhost" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.904 [INFO][4791] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" host="localhost" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.904 [INFO][4791] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" host="localhost" Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.904 [INFO][4791] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:17:34.945706 containerd[1465]: 2025-11-08 00:17:34.904 [INFO][4791] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" HandleID="k8s-pod-network.41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:34.947122 containerd[1465]: 2025-11-08 00:17:34.912 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Namespace="calico-system" Pod="calico-kube-controllers-5f7468b9b5-s94kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0", GenerateName:"calico-kube-controllers-5f7468b9b5-", Namespace:"calico-system", SelfLink:"", UID:"0a4ccd10-3267-4054-9505-eb9db275a87f", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7468b9b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f7468b9b5-s94kb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c4eb3a2af9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:34.947122 containerd[1465]: 2025-11-08 00:17:34.913 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Namespace="calico-system" Pod="calico-kube-controllers-5f7468b9b5-s94kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:34.947122 containerd[1465]: 2025-11-08 00:17:34.913 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c4eb3a2af9 ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Namespace="calico-system" Pod="calico-kube-controllers-5f7468b9b5-s94kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:34.947122 containerd[1465]: 2025-11-08 00:17:34.932 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Namespace="calico-system" Pod="calico-kube-controllers-5f7468b9b5-s94kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:34.947122 containerd[1465]: 2025-11-08 00:17:34.932 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Namespace="calico-system" Pod="calico-kube-controllers-5f7468b9b5-s94kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0", GenerateName:"calico-kube-controllers-5f7468b9b5-", Namespace:"calico-system", SelfLink:"", UID:"0a4ccd10-3267-4054-9505-eb9db275a87f", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7468b9b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c", Pod:"calico-kube-controllers-5f7468b9b5-s94kb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c4eb3a2af9", MAC:"8e:d7:5c:19:e5:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:34.947122 containerd[1465]: 2025-11-08 00:17:34.941 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c" Namespace="calico-system" Pod="calico-kube-controllers-5f7468b9b5-s94kb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:34.992673 containerd[1465]: time="2025-11-08T00:17:34.989589257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:34.992673 containerd[1465]: time="2025-11-08T00:17:34.989690176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:34.992673 containerd[1465]: time="2025-11-08T00:17:34.989705274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:34.992673 containerd[1465]: time="2025-11-08T00:17:34.989837362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:35.002402 containerd[1465]: time="2025-11-08T00:17:35.002330573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2fwvz,Uid:7f4fec81-40b7-4cbb-9ed0-146aff61e0a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82\"" Nov 8 00:17:35.005215 containerd[1465]: time="2025-11-08T00:17:35.004984190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:17:35.028987 systemd[1]: Started cri-containerd-41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c.scope - libcontainer container 41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c. Nov 8 00:17:35.053437 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:17:35.054784 systemd-networkd[1401]: calicd203c5bd52: Gained IPv6LL Nov 8 00:17:35.090739 containerd[1465]: time="2025-11-08T00:17:35.090611093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f7468b9b5-s94kb,Uid:0a4ccd10-3267-4054-9505-eb9db275a87f,Namespace:calico-system,Attempt:1,} returns sandbox id \"41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c\"" Nov 8 00:17:35.378379 containerd[1465]: time="2025-11-08T00:17:35.378325810Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:35.378984 systemd-networkd[1401]: cali99a1f4060ac: Link UP Nov 8 00:17:35.380385 systemd-networkd[1401]: cali99a1f4060ac: Gained carrier Nov 8 00:17:35.432699 containerd[1465]: time="2025-11-08T00:17:35.432636861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:17:35.432853 containerd[1465]: time="2025-11-08T00:17:35.432706421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:17:35.432964 kubelet[2522]: E1108 00:17:35.432916 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:17:35.433013 kubelet[2522]: E1108 00:17:35.432982 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:17:35.433337 containerd[1465]: time="2025-11-08T00:17:35.433307378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:17:35.433391 kubelet[2522]: E1108 00:17:35.433309 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lh5vg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2fwvz_calico-system(7f4fec81-40b7-4cbb-9ed0-146aff61e0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:35.434694 kubelet[2522]: E1108 00:17:35.434618 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2fwvz" podUID="7f4fec81-40b7-4cbb-9ed0-146aff61e0a7" Nov 8 00:17:35.502750 systemd-networkd[1401]: 
cali18dda85ea61: Gained IPv6LL Nov 8 00:17:35.553383 systemd[1]: run-netns-cni\x2d11cb1870\x2dd2fe\x2d0585\x2dd6a1\x2d4d66787b0e80.mount: Deactivated successfully. Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.745 [INFO][4743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0 calico-apiserver-8577fdc947- calico-apiserver 031c3221-a127-47c9-883a-8bd9e65d7753 1102 0 2025-11-08 00:17:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8577fdc947 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8577fdc947-lzmrt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali99a1f4060ac [] [] }} ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-lzmrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.746 [INFO][4743] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-lzmrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.810 [INFO][4790] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" HandleID="k8s-pod-network.e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.811 [INFO][4790] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" HandleID="k8s-pod-network.e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036bc10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8577fdc947-lzmrt", "timestamp":"2025-11-08 00:17:34.810702965 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.811 [INFO][4790] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.905 [INFO][4790] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.905 [INFO][4790] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.957 [INFO][4790] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" host="localhost" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.974 [INFO][4790] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.986 [INFO][4790] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.991 [INFO][4790] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.999 [INFO][4790] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:34.999 [INFO][4790] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" host="localhost" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:35.003 [INFO][4790] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048 Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:35.012 [INFO][4790] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" host="localhost" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:35.369 [INFO][4790] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" host="localhost" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:35.370 [INFO][4790] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" host="localhost" Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:35.370 [INFO][4790] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:17:35.569623 containerd[1465]: 2025-11-08 00:17:35.370 [INFO][4790] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" HandleID="k8s-pod-network.e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:35.570709 containerd[1465]: 2025-11-08 00:17:35.375 [INFO][4743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-lzmrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0", GenerateName:"calico-apiserver-8577fdc947-", Namespace:"calico-apiserver", SelfLink:"", UID:"031c3221-a127-47c9-883a-8bd9e65d7753", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8577fdc947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8577fdc947-lzmrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99a1f4060ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:35.570709 containerd[1465]: 2025-11-08 00:17:35.375 [INFO][4743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-lzmrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:35.570709 containerd[1465]: 2025-11-08 00:17:35.375 [INFO][4743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99a1f4060ac ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-lzmrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:35.570709 containerd[1465]: 2025-11-08 00:17:35.380 [INFO][4743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-lzmrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:35.570709 containerd[1465]: 2025-11-08 00:17:35.381 [INFO][4743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-lzmrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0", GenerateName:"calico-apiserver-8577fdc947-", Namespace:"calico-apiserver", SelfLink:"", UID:"031c3221-a127-47c9-883a-8bd9e65d7753", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8577fdc947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048", Pod:"calico-apiserver-8577fdc947-lzmrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99a1f4060ac", MAC:"ba:ab:77:35:de:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:35.570709 containerd[1465]: 2025-11-08 00:17:35.563 [INFO][4743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-lzmrt" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:35.599725 containerd[1465]: time="2025-11-08T00:17:35.599516773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:35.599725 containerd[1465]: time="2025-11-08T00:17:35.599617462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:35.599725 containerd[1465]: time="2025-11-08T00:17:35.599634164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:35.599952 containerd[1465]: time="2025-11-08T00:17:35.599772002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:35.620178 systemd[1]: run-containerd-runc-k8s.io-e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048-runc.NMlpWv.mount: Deactivated successfully. Nov 8 00:17:35.635788 systemd[1]: Started cri-containerd-e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048.scope - libcontainer container e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048. 
Nov 8 00:17:35.649738 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:17:35.686619 kubelet[2522]: E1108 00:17:35.685670 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:35.688148 kubelet[2522]: E1108 00:17:35.687677 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2fwvz" podUID="7f4fec81-40b7-4cbb-9ed0-146aff61e0a7" Nov 8 00:17:35.688148 kubelet[2522]: E1108 00:17:35.688043 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:35.689677 kubelet[2522]: E1108 00:17:35.689630 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:35.700083 containerd[1465]: time="2025-11-08T00:17:35.699940929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8577fdc947-lzmrt,Uid:031c3221-a127-47c9-883a-8bd9e65d7753,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048\"" Nov 8 00:17:35.727184 systemd-networkd[1401]: calieeda99b70e4: Link UP Nov 8 00:17:35.728017 systemd-networkd[1401]: calieeda99b70e4: Gained carrier Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:34.747 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0 calico-apiserver-8577fdc947- calico-apiserver 89b15836-7628-4868-bd25-c2735fc5d488 1101 0 2025-11-08 00:17:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8577fdc947 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-8577fdc947-456nr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
calieeda99b70e4 [] [] }} ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-456nr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--456nr-" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:34.748 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-456nr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:34.821 [INFO][4802] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" HandleID="k8s-pod-network.eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:34.821 [INFO][4802] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" HandleID="k8s-pod-network.eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000149800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-8577fdc947-456nr", "timestamp":"2025-11-08 00:17:34.821254494 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:34.821 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.370 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.370 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.563 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" host="localhost" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.669 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.680 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.685 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.689 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.689 [INFO][4802] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" host="localhost" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.691 [INFO][4802] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2 Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.696 [INFO][4802] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" host="localhost" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.709 [INFO][4802] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" host="localhost" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.709 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" host="localhost" Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.709 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:17:35.749032 containerd[1465]: 2025-11-08 00:17:35.709 [INFO][4802] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" HandleID="k8s-pod-network.eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" Nov 8 00:17:35.750083 containerd[1465]: 2025-11-08 00:17:35.718 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-456nr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0", GenerateName:"calico-apiserver-8577fdc947-", Namespace:"calico-apiserver", SelfLink:"", UID:"89b15836-7628-4868-bd25-c2735fc5d488", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8577fdc947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-8577fdc947-456nr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieeda99b70e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:35.750083 containerd[1465]: 2025-11-08 00:17:35.719 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-456nr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" Nov 8 00:17:35.750083 containerd[1465]: 2025-11-08 00:17:35.719 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieeda99b70e4 ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-456nr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" Nov 8 00:17:35.750083 containerd[1465]: 2025-11-08 00:17:35.729 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-456nr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" Nov 8 00:17:35.750083 containerd[1465]: 2025-11-08 00:17:35.730 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-456nr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0", GenerateName:"calico-apiserver-8577fdc947-", Namespace:"calico-apiserver", SelfLink:"", UID:"89b15836-7628-4868-bd25-c2735fc5d488", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8577fdc947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2", Pod:"calico-apiserver-8577fdc947-456nr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieeda99b70e4", MAC:"62:f2:03:fb:7a:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:35.750083 containerd[1465]: 2025-11-08 00:17:35.743 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8577fdc947-456nr" WorkloadEndpoint="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0" Nov 8 00:17:35.780799 containerd[1465]: time="2025-11-08T00:17:35.780642614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:17:35.780799 containerd[1465]: time="2025-11-08T00:17:35.780718927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:17:35.780799 containerd[1465]: time="2025-11-08T00:17:35.780731070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:35.780996 containerd[1465]: time="2025-11-08T00:17:35.780877685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:17:35.812483 systemd[1]: Started cri-containerd-eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2.scope - libcontainer container eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2. 
Nov 8 00:17:35.823752 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Nov 8 00:17:35.832377 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 8 00:17:35.864822 containerd[1465]: time="2025-11-08T00:17:35.864767149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8577fdc947-456nr,Uid:89b15836-7628-4868-bd25-c2735fc5d488,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2\"" Nov 8 00:17:35.887701 containerd[1465]: time="2025-11-08T00:17:35.887525312Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:35.889307 containerd[1465]: time="2025-11-08T00:17:35.889242412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:17:35.889307 containerd[1465]: time="2025-11-08T00:17:35.889341408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:17:35.889633 kubelet[2522]: E1108 00:17:35.889560 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:17:35.889719 kubelet[2522]: E1108 00:17:35.889681 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:17:35.890161 containerd[1465]: time="2025-11-08T00:17:35.890029649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:17:35.890857 kubelet[2522]: E1108 00:17:35.890047 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tz7nj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f7468b9b5-s94kb_calico-system(0a4ccd10-3267-4054-9505-eb9db275a87f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:35.892092 kubelet[2522]: E1108 00:17:35.892038 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" podUID="0a4ccd10-3267-4054-9505-eb9db275a87f" Nov 8 00:17:36.241976 containerd[1465]: time="2025-11-08T00:17:36.241912151Z" level=info msg="trying next host 
- response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:36.243308 containerd[1465]: time="2025-11-08T00:17:36.243235844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:17:36.243495 containerd[1465]: time="2025-11-08T00:17:36.243292440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:17:36.243597 kubelet[2522]: E1108 00:17:36.243518 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:17:36.243648 kubelet[2522]: E1108 00:17:36.243600 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:17:36.243999 kubelet[2522]: E1108 00:17:36.243911 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw59n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8577fdc947-lzmrt_calico-apiserver(031c3221-a127-47c9-883a-8bd9e65d7753): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:17:36.244180 containerd[1465]: time="2025-11-08T00:17:36.244023622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:17:36.246096 kubelet[2522]: E1108 00:17:36.245620 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" podUID="031c3221-a127-47c9-883a-8bd9e65d7753"
Nov 8 00:17:36.527242 systemd-networkd[1401]: cali77e9f58a3f3: Gained IPv6LL
Nov 8 00:17:36.594952 containerd[1465]: time="2025-11-08T00:17:36.594907169Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:17:36.655817 containerd[1465]: time="2025-11-08T00:17:36.655750479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:17:36.655904 containerd[1465]: time="2025-11-08T00:17:36.655792719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:17:36.656102 kubelet[2522]: E1108 00:17:36.656051 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:17:36.656170 kubelet[2522]: E1108 00:17:36.656115 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:17:36.656330 kubelet[2522]: E1108 00:17:36.656275 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dwgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8577fdc947-456nr_calico-apiserver(89b15836-7628-4868-bd25-c2735fc5d488): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:17:36.657485 kubelet[2522]: E1108 00:17:36.657452 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" podUID="89b15836-7628-4868-bd25-c2735fc5d488"
Nov 8 00:17:36.688872 kubelet[2522]: E1108 00:17:36.688840 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" podUID="031c3221-a127-47c9-883a-8bd9e65d7753"
Nov 8 00:17:36.689522 kubelet[2522]: E1108 00:17:36.689500 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:36.690348 kubelet[2522]: E1108 00:17:36.690320 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:17:36.691009 kubelet[2522]: E1108 00:17:36.690964 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" podUID="89b15836-7628-4868-bd25-c2735fc5d488"
Nov 8 00:17:36.691009 kubelet[2522]: E1108 00:17:36.690975 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2fwvz" podUID="7f4fec81-40b7-4cbb-9ed0-146aff61e0a7"
Nov 8 00:17:36.691009 kubelet[2522]: E1108 00:17:36.690985 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" podUID="0a4ccd10-3267-4054-9505-eb9db275a87f"
Nov 8 00:17:36.910787 systemd-networkd[1401]: cali8c4eb3a2af9: Gained IPv6LL
Nov 8 00:17:37.102789 systemd-networkd[1401]: cali99a1f4060ac: Gained IPv6LL
Nov 8 00:17:37.358977 systemd-networkd[1401]: calieeda99b70e4: Gained IPv6LL
Nov 8 00:17:37.495697 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:44538.service - OpenSSH per-connection server daemon (10.0.0.1:44538).
Nov 8 00:17:37.546589 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 44538 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:17:37.549249 sshd[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:17:37.554923 systemd-logind[1450]: New session 10 of user core.
Nov 8 00:17:37.562821 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 8 00:17:37.694501 kubelet[2522]: E1108 00:17:37.694337 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" podUID="89b15836-7628-4868-bd25-c2735fc5d488"
Nov 8 00:17:37.695376 kubelet[2522]: E1108 00:17:37.694747 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" podUID="031c3221-a127-47c9-883a-8bd9e65d7753"
Nov 8 00:17:37.703208 sshd[5080]: pam_unix(sshd:session): session closed for user core
Nov 8 00:17:37.708072 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:44538.service: Deactivated successfully.
Nov 8 00:17:37.710876 systemd[1]: session-10.scope: Deactivated successfully.
Nov 8 00:17:37.711639 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit.
Nov 8 00:17:37.712685 systemd-logind[1450]: Removed session 10.
Nov 8 00:17:42.724794 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:44542.service - OpenSSH per-connection server daemon (10.0.0.1:44542).
Nov 8 00:17:42.765016 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 44542 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:17:42.768010 sshd[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:17:42.773310 systemd-logind[1450]: New session 11 of user core.
Nov 8 00:17:42.783748 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 8 00:17:42.900507 sshd[5107]: pam_unix(sshd:session): session closed for user core
Nov 8 00:17:42.909336 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:44542.service: Deactivated successfully.
Nov 8 00:17:42.911836 systemd[1]: session-11.scope: Deactivated successfully.
Nov 8 00:17:42.913654 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit.
Nov 8 00:17:42.921940 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:44556.service - OpenSSH per-connection server daemon (10.0.0.1:44556).
Nov 8 00:17:42.923226 systemd-logind[1450]: Removed session 11.
Nov 8 00:17:42.957605 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 44556 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:17:42.959609 sshd[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:17:42.964666 systemd-logind[1450]: New session 12 of user core.
Nov 8 00:17:42.975725 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 8 00:17:43.138484 sshd[5122]: pam_unix(sshd:session): session closed for user core
Nov 8 00:17:43.146197 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:44556.service: Deactivated successfully.
Nov 8 00:17:43.148415 systemd[1]: session-12.scope: Deactivated successfully.
Nov 8 00:17:43.152211 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit.
Nov 8 00:17:43.159928 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:44572.service - OpenSSH per-connection server daemon (10.0.0.1:44572).
Nov 8 00:17:43.161689 systemd-logind[1450]: Removed session 12.
Nov 8 00:17:43.200884 sshd[5134]: Accepted publickey for core from 10.0.0.1 port 44572 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:17:43.203082 sshd[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:17:43.208538 systemd-logind[1450]: New session 13 of user core.
Nov 8 00:17:43.215715 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 8 00:17:43.335333 sshd[5134]: pam_unix(sshd:session): session closed for user core
Nov 8 00:17:43.340469 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit.
Nov 8 00:17:43.340841 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:44572.service: Deactivated successfully.
Nov 8 00:17:43.342879 systemd[1]: session-13.scope: Deactivated successfully.
Nov 8 00:17:43.344159 systemd-logind[1450]: Removed session 13.
Nov 8 00:17:45.366192 containerd[1465]: time="2025-11-08T00:17:45.366032029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:17:45.727404 containerd[1465]: time="2025-11-08T00:17:45.727356888Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:17:45.748697 containerd[1465]: time="2025-11-08T00:17:45.748636557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:17:45.748782 containerd[1465]: time="2025-11-08T00:17:45.748711147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:17:45.748943 kubelet[2522]: E1108 00:17:45.748885 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:17:45.748943 kubelet[2522]: E1108 00:17:45.748944 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:17:45.749448 kubelet[2522]: E1108 00:17:45.749076 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:074ed4dfb7104ec888d28786a94cff0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gscnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b79d8578-fxtxr_calico-system(53aaf5c5-a07c-4b5d-8085-2d2b30008c52): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:17:45.751446 containerd[1465]: time="2025-11-08T00:17:45.751401202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:17:46.127478 containerd[1465]: time="2025-11-08T00:17:46.127284584Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:17:46.129923 containerd[1465]: time="2025-11-08T00:17:46.129865955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:17:46.130065 containerd[1465]: time="2025-11-08T00:17:46.129962165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:17:46.130195 kubelet[2522]: E1108 00:17:46.130146 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:17:46.130254 kubelet[2522]: E1108 00:17:46.130207 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:17:46.130385 kubelet[2522]: E1108 00:17:46.130346 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gscnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b79d8578-fxtxr_calico-system(53aaf5c5-a07c-4b5d-8085-2d2b30008c52): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:17:46.131626 kubelet[2522]: E1108 00:17:46.131551 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b79d8578-fxtxr" podUID="53aaf5c5-a07c-4b5d-8085-2d2b30008c52"
Nov 8 00:17:48.348621 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:52724.service - OpenSSH per-connection server daemon (10.0.0.1:52724).
Nov 8 00:17:48.365080 containerd[1465]: time="2025-11-08T00:17:48.365032152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:17:48.386869 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 52724 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:17:48.388836 sshd[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:17:48.395869 systemd-logind[1450]: New session 14 of user core.
Nov 8 00:17:48.401737 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 8 00:17:48.517174 sshd[5160]: pam_unix(sshd:session): session closed for user core
Nov 8 00:17:48.521290 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:52724.service: Deactivated successfully.
Nov 8 00:17:48.523630 systemd[1]: session-14.scope: Deactivated successfully.
Nov 8 00:17:48.524323 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit.
Nov 8 00:17:48.525283 systemd-logind[1450]: Removed session 14.
Nov 8 00:17:48.701416 containerd[1465]: time="2025-11-08T00:17:48.701287089Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:17:48.704477 containerd[1465]: time="2025-11-08T00:17:48.704398525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:17:48.704554 containerd[1465]: time="2025-11-08T00:17:48.704454691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:17:48.704737 kubelet[2522]: E1108 00:17:48.704679 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:17:48.705081 kubelet[2522]: E1108 00:17:48.704743 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:17:48.705081 kubelet[2522]: E1108 00:17:48.704990 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dwgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8577fdc947-456nr_calico-apiserver(89b15836-7628-4868-bd25-c2735fc5d488): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:17:48.705349 containerd[1465]: time="2025-11-08T00:17:48.705182025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:17:48.706337 kubelet[2522]: E1108 00:17:48.706283 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" podUID="89b15836-7628-4868-bd25-c2735fc5d488"
Nov 8 00:17:49.164514 containerd[1465]: time="2025-11-08T00:17:49.164441447Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:17:49.165701 containerd[1465]: time="2025-11-08T00:17:49.165658379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:17:49.165780 containerd[1465]: time="2025-11-08T00:17:49.165739882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:17:49.165983 kubelet[2522]: E1108 00:17:49.165928 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:17:49.166034 kubelet[2522]: E1108 00:17:49.166002 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:17:49.166660 kubelet[2522]: E1108 00:17:49.166251 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tz7nj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f7468b9b5-s94kb_calico-system(0a4ccd10-3267-4054-9505-eb9db275a87f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:17:49.166835 containerd[1465]: time="2025-11-08T00:17:49.166383729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:17:49.167534 kubelet[2522]: E1108 00:17:49.167481 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" podUID="0a4ccd10-3267-4054-9505-eb9db275a87f"
Nov 8 00:17:49.342428 containerd[1465]: time="2025-11-08T00:17:49.341998348Z" level=info msg="StopPodSandbox for \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\""
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.381 [WARNING][5184] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0", GenerateName:"calico-apiserver-8577fdc947-", Namespace:"calico-apiserver", SelfLink:"", UID:"89b15836-7628-4868-bd25-c2735fc5d488", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8577fdc947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2", Pod:"calico-apiserver-8577fdc947-456nr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieeda99b70e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.382 [INFO][5184] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc"
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.382 [INFO][5184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" iface="eth0" netns=""
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.382 [INFO][5184] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc"
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.382 [INFO][5184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc"
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.404 [INFO][5196] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" HandleID="k8s-pod-network.a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0"
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.405 [INFO][5196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.405 [INFO][5196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.413 [WARNING][5196] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" HandleID="k8s-pod-network.a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0"
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.413 [INFO][5196] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" HandleID="k8s-pod-network.a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0"
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.416 [INFO][5196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:17:49.422462 containerd[1465]: 2025-11-08 00:17:49.419 [INFO][5184] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc"
Nov 8 00:17:49.422462 containerd[1465]: time="2025-11-08T00:17:49.422417919Z" level=info msg="TearDown network for sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\" successfully"
Nov 8 00:17:49.422462 containerd[1465]: time="2025-11-08T00:17:49.422439579Z" level=info msg="StopPodSandbox for \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\" returns successfully"
Nov 8 00:17:49.423337 containerd[1465]: time="2025-11-08T00:17:49.423075833Z" level=info msg="RemovePodSandbox for \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\""
Nov 8 00:17:49.426012 containerd[1465]: time="2025-11-08T00:17:49.425990388Z" level=info msg="Forcibly stopping sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\""
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.462 [WARNING][5214] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0", GenerateName:"calico-apiserver-8577fdc947-", Namespace:"calico-apiserver", SelfLink:"", UID:"89b15836-7628-4868-bd25-c2735fc5d488", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8577fdc947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb2ae4951c3850e5106934b9e207f8134e384a8dd2c15a9441f72c04d2f17fe2", Pod:"calico-apiserver-8577fdc947-456nr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieeda99b70e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.463 [INFO][5214] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc"
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.463 [INFO][5214] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" iface="eth0" netns=""
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.463 [INFO][5214] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc"
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.463 [INFO][5214] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc"
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.486 [INFO][5223] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" HandleID="k8s-pod-network.a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0"
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.487 [INFO][5223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.487 [INFO][5223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.493 [WARNING][5223] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" HandleID="k8s-pod-network.a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0"
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.494 [INFO][5223] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" HandleID="k8s-pod-network.a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc" Workload="localhost-k8s-calico--apiserver--8577fdc947--456nr-eth0"
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.495 [INFO][5223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:17:49.501311 containerd[1465]: 2025-11-08 00:17:49.498 [INFO][5214] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc"
Nov 8 00:17:49.501763 containerd[1465]: time="2025-11-08T00:17:49.501383802Z" level=info msg="TearDown network for sandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\" successfully"
Nov 8 00:17:49.506274 containerd[1465]: time="2025-11-08T00:17:49.506229269Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 8 00:17:49.506345 containerd[1465]: time="2025-11-08T00:17:49.506304149Z" level=info msg="RemovePodSandbox \"a2ab963a6fdc56f736a9d650f3d6ee09a271b2dd946de0f6d147988bbd3750bc\" returns successfully"
Nov 8 00:17:49.507033 containerd[1465]: time="2025-11-08T00:17:49.507006367Z" level=info msg="StopPodSandbox for \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\""
Nov 8 00:17:49.512099 containerd[1465]: time="2025-11-08T00:17:49.512070104Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:17:49.513197 containerd[1465]: time="2025-11-08T00:17:49.513135482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:17:49.513253 containerd[1465]: time="2025-11-08T00:17:49.513190875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:17:49.513453 kubelet[2522]: E1108 00:17:49.513397 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:17:49.513558 kubelet[2522]: E1108 00:17:49.513463 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:17:49.513793 kubelet[2522]: E1108 00:17:49.513688 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw59n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8577fdc947-lzmrt_calico-apiserver(031c3221-a127-47c9-883a-8bd9e65d7753): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:17:49.517603 kubelet[2522]: E1108 00:17:49.515084 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" podUID="031c3221-a127-47c9-883a-8bd9e65d7753"
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.544 [WARNING][5241] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--2fwvz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7f4fec81-40b7-4cbb-9ed0-146aff61e0a7", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82", Pod:"goldmane-666569f655-2fwvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali77e9f58a3f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.544 [INFO][5241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb"
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.544 [INFO][5241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" iface="eth0" netns=""
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.544 [INFO][5241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb"
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.544 [INFO][5241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb"
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.569 [INFO][5249] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" HandleID="k8s-pod-network.9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0"
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.569 [INFO][5249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.569 [INFO][5249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.578 [WARNING][5249] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" HandleID="k8s-pod-network.9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0"
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.578 [INFO][5249] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" HandleID="k8s-pod-network.9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0"
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.579 [INFO][5249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:17:49.586401 containerd[1465]: 2025-11-08 00:17:49.583 [INFO][5241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb"
Nov 8 00:17:49.586846 containerd[1465]: time="2025-11-08T00:17:49.586458292Z" level=info msg="TearDown network for sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\" successfully"
Nov 8 00:17:49.586846 containerd[1465]: time="2025-11-08T00:17:49.586487106Z" level=info msg="StopPodSandbox for \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\" returns successfully"
Nov 8 00:17:49.587062 containerd[1465]: time="2025-11-08T00:17:49.587033130Z" level=info msg="RemovePodSandbox for \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\""
Nov 8 00:17:49.587115 containerd[1465]: time="2025-11-08T00:17:49.587073235Z" level=info msg="Forcibly stopping sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\""
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.630 [WARNING][5266] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--2fwvz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7f4fec81-40b7-4cbb-9ed0-146aff61e0a7", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17c3e048dc5259a5a3a54c6b7441ec0857663733065a67cd4a74a7dec9adcb82", Pod:"goldmane-666569f655-2fwvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali77e9f58a3f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.630 [INFO][5266] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb"
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.630 [INFO][5266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" iface="eth0" netns=""
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.630 [INFO][5266] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb"
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.630 [INFO][5266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb"
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.657 [INFO][5275] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" HandleID="k8s-pod-network.9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0"
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.657 [INFO][5275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.657 [INFO][5275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.664 [WARNING][5275] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" HandleID="k8s-pod-network.9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0"
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.664 [INFO][5275] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" HandleID="k8s-pod-network.9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb" Workload="localhost-k8s-goldmane--666569f655--2fwvz-eth0"
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.665 [INFO][5275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:17:49.672279 containerd[1465]: 2025-11-08 00:17:49.669 [INFO][5266] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb"
Nov 8 00:17:49.672807 containerd[1465]: time="2025-11-08T00:17:49.672323545Z" level=info msg="TearDown network for sandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\" successfully"
Nov 8 00:17:49.684020 containerd[1465]: time="2025-11-08T00:17:49.683984273Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 8 00:17:49.684100 containerd[1465]: time="2025-11-08T00:17:49.684049495Z" level=info msg="RemovePodSandbox \"9f2736640316f98b5eaec4bf69830dcdc1019c7cb397aca0fc10b229c8f188eb\" returns successfully"
Nov 8 00:17:49.684778 containerd[1465]: time="2025-11-08T00:17:49.684733218Z" level=info msg="StopPodSandbox for \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\""
Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.720 [WARNING][5293] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"60f79ede-23ce-4941-9970-af1b19912c40", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 16, 55, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884", Pod:"coredns-674b8bbfcf-sdjt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali484c2ec8182", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.721 [INFO][5293] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b"
Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.721 [INFO][5293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" iface="eth0" netns=""
Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.721 [INFO][5293] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b"
Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.721 [INFO][5293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b"
Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.745 [INFO][5301] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" HandleID="k8s-pod-network.5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0"
Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.745 [INFO][5301] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.746 [INFO][5301] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.752 [WARNING][5301] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" HandleID="k8s-pod-network.5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.752 [INFO][5301] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" HandleID="k8s-pod-network.5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.753 [INFO][5301] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:49.759725 containerd[1465]: 2025-11-08 00:17:49.756 [INFO][5293] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:49.760268 containerd[1465]: time="2025-11-08T00:17:49.759772478Z" level=info msg="TearDown network for sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\" successfully" Nov 8 00:17:49.760268 containerd[1465]: time="2025-11-08T00:17:49.759803125Z" level=info msg="StopPodSandbox for \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\" returns successfully" Nov 8 00:17:49.760268 containerd[1465]: time="2025-11-08T00:17:49.760204668Z" level=info msg="RemovePodSandbox for \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\"" Nov 8 00:17:49.760268 containerd[1465]: time="2025-11-08T00:17:49.760237270Z" level=info msg="Forcibly stopping sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\"" Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.795 [WARNING][5320] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"60f79ede-23ce-4941-9970-af1b19912c40", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b810bfb1b47deaf5965c7214622cff91988cff0266bdc020a9cc03d5c0916884", Pod:"coredns-674b8bbfcf-sdjt5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali484c2ec8182", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.795 [INFO][5320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.795 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" iface="eth0" netns="" Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.795 [INFO][5320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.795 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.816 [INFO][5329] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" HandleID="k8s-pod-network.5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.816 [INFO][5329] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.816 [INFO][5329] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.822 [WARNING][5329] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" HandleID="k8s-pod-network.5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.822 [INFO][5329] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" HandleID="k8s-pod-network.5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Workload="localhost-k8s-coredns--674b8bbfcf--sdjt5-eth0" Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.823 [INFO][5329] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:49.829310 containerd[1465]: 2025-11-08 00:17:49.826 [INFO][5320] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b" Nov 8 00:17:49.829770 containerd[1465]: time="2025-11-08T00:17:49.829366496Z" level=info msg="TearDown network for sandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\" successfully" Nov 8 00:17:49.833602 containerd[1465]: time="2025-11-08T00:17:49.833531557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:17:49.833602 containerd[1465]: time="2025-11-08T00:17:49.833612689Z" level=info msg="RemovePodSandbox \"5eecf228d7fb243bb07d03fded3b7bf63e9a490d1a70a6fddc73bacd9944761b\" returns successfully" Nov 8 00:17:49.834208 containerd[1465]: time="2025-11-08T00:17:49.834126663Z" level=info msg="StopPodSandbox for \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\"" Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.868 [WARNING][5347] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0", GenerateName:"calico-apiserver-8577fdc947-", Namespace:"calico-apiserver", SelfLink:"", UID:"031c3221-a127-47c9-883a-8bd9e65d7753", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8577fdc947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048", Pod:"calico-apiserver-8577fdc947-lzmrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99a1f4060ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.868 [INFO][5347] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.868 [INFO][5347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" iface="eth0" netns="" Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.868 [INFO][5347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.868 [INFO][5347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.888 [INFO][5356] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" HandleID="k8s-pod-network.db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.888 [INFO][5356] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.888 [INFO][5356] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.894 [WARNING][5356] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" HandleID="k8s-pod-network.db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.894 [INFO][5356] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" HandleID="k8s-pod-network.db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.896 [INFO][5356] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:49.902226 containerd[1465]: 2025-11-08 00:17:49.899 [INFO][5347] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:49.902748 containerd[1465]: time="2025-11-08T00:17:49.902275169Z" level=info msg="TearDown network for sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\" successfully" Nov 8 00:17:49.902748 containerd[1465]: time="2025-11-08T00:17:49.902305566Z" level=info msg="StopPodSandbox for \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\" returns successfully" Nov 8 00:17:49.902904 containerd[1465]: time="2025-11-08T00:17:49.902878942Z" level=info msg="RemovePodSandbox for \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\"" Nov 8 00:17:49.902949 containerd[1465]: time="2025-11-08T00:17:49.902909178Z" level=info msg="Forcibly stopping sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\"" Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.942 [WARNING][5373] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0", GenerateName:"calico-apiserver-8577fdc947-", Namespace:"calico-apiserver", SelfLink:"", UID:"031c3221-a127-47c9-883a-8bd9e65d7753", ResourceVersion:"1196", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8577fdc947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e22321c1072d12852f0c8f604babdd5afb4061ea22fdf253d10dd4e246f68048", Pod:"calico-apiserver-8577fdc947-lzmrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99a1f4060ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.942 [INFO][5373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.942 [INFO][5373] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" iface="eth0" netns="" Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.942 [INFO][5373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.942 [INFO][5373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.963 [INFO][5382] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" HandleID="k8s-pod-network.db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.963 [INFO][5382] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.963 [INFO][5382] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.969 [WARNING][5382] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" HandleID="k8s-pod-network.db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.970 [INFO][5382] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" HandleID="k8s-pod-network.db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Workload="localhost-k8s-calico--apiserver--8577fdc947--lzmrt-eth0" Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.971 [INFO][5382] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:49.977946 containerd[1465]: 2025-11-08 00:17:49.974 [INFO][5373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150" Nov 8 00:17:49.978491 containerd[1465]: time="2025-11-08T00:17:49.977983113Z" level=info msg="TearDown network for sandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\" successfully" Nov 8 00:17:49.986361 containerd[1465]: time="2025-11-08T00:17:49.986317863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:17:49.986420 containerd[1465]: time="2025-11-08T00:17:49.986365593Z" level=info msg="RemovePodSandbox \"db2133efc2fa926d2b56fc29493006ccd571df1ff2aa21cf4f99a4b8baa69150\" returns successfully" Nov 8 00:17:49.987014 containerd[1465]: time="2025-11-08T00:17:49.986974445Z" level=info msg="StopPodSandbox for \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\"" Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.023 [WARNING][5400] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vdtmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"485c28c7-3ce9-4d8e-9396-e75393354e2f", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382", Pod:"csi-node-driver-vdtmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18dda85ea61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.023 [INFO][5400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.023 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" iface="eth0" netns="" Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.023 [INFO][5400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.023 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.044 [INFO][5409] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" HandleID="k8s-pod-network.f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.045 [INFO][5409] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.045 [INFO][5409] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.051 [WARNING][5409] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" HandleID="k8s-pod-network.f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.051 [INFO][5409] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" HandleID="k8s-pod-network.f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.052 [INFO][5409] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:50.059016 containerd[1465]: 2025-11-08 00:17:50.055 [INFO][5400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:50.059496 containerd[1465]: time="2025-11-08T00:17:50.059066687Z" level=info msg="TearDown network for sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\" successfully" Nov 8 00:17:50.059496 containerd[1465]: time="2025-11-08T00:17:50.059092716Z" level=info msg="StopPodSandbox for \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\" returns successfully" Nov 8 00:17:50.059809 containerd[1465]: time="2025-11-08T00:17:50.059781217Z" level=info msg="RemovePodSandbox for \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\"" Nov 8 00:17:50.059854 containerd[1465]: time="2025-11-08T00:17:50.059817275Z" level=info msg="Forcibly stopping sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\"" Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.098 [WARNING][5427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vdtmc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"485c28c7-3ce9-4d8e-9396-e75393354e2f", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b4b1f876f0e5898a8b19d879e4af6c3ce7dafa5e1cb591261ad30fdbcc3a382", Pod:"csi-node-driver-vdtmc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18dda85ea61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.098 [INFO][5427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.098 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" iface="eth0" netns="" Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.098 [INFO][5427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.098 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.123 [INFO][5436] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" HandleID="k8s-pod-network.f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.123 [INFO][5436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.123 [INFO][5436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.129 [WARNING][5436] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" HandleID="k8s-pod-network.f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.129 [INFO][5436] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" HandleID="k8s-pod-network.f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Workload="localhost-k8s-csi--node--driver--vdtmc-eth0" Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.132 [INFO][5436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:50.138312 containerd[1465]: 2025-11-08 00:17:50.135 [INFO][5427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd" Nov 8 00:17:50.138845 containerd[1465]: time="2025-11-08T00:17:50.138350868Z" level=info msg="TearDown network for sandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\" successfully" Nov 8 00:17:50.142951 containerd[1465]: time="2025-11-08T00:17:50.142909357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:17:50.143021 containerd[1465]: time="2025-11-08T00:17:50.142955694Z" level=info msg="RemovePodSandbox \"f7927cc4384c01ad047b0e0393714b9be96369959ca6ea63d070116dcdbd5cfd\" returns successfully" Nov 8 00:17:50.143659 containerd[1465]: time="2025-11-08T00:17:50.143614590Z" level=info msg="StopPodSandbox for \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\"" Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.177 [WARNING][5455] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" WorkloadEndpoint="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.178 [INFO][5455] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.178 [INFO][5455] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" iface="eth0" netns="" Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.178 [INFO][5455] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.178 [INFO][5455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.201 [INFO][5463] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" HandleID="k8s-pod-network.974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Workload="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.201 [INFO][5463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.201 [INFO][5463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.207 [WARNING][5463] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" HandleID="k8s-pod-network.974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Workload="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.207 [INFO][5463] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" HandleID="k8s-pod-network.974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Workload="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.208 [INFO][5463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:50.214885 containerd[1465]: 2025-11-08 00:17:50.212 [INFO][5455] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:50.215392 containerd[1465]: time="2025-11-08T00:17:50.214937419Z" level=info msg="TearDown network for sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\" successfully" Nov 8 00:17:50.215392 containerd[1465]: time="2025-11-08T00:17:50.214963728Z" level=info msg="StopPodSandbox for \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\" returns successfully" Nov 8 00:17:50.215535 containerd[1465]: time="2025-11-08T00:17:50.215494664Z" level=info msg="RemovePodSandbox for \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\"" Nov 8 00:17:50.215630 containerd[1465]: time="2025-11-08T00:17:50.215537694Z" level=info msg="Forcibly stopping sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\"" Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.255 [WARNING][5481] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" WorkloadEndpoint="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.255 [INFO][5481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.255 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" iface="eth0" netns="" Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.255 [INFO][5481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.255 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.278 [INFO][5491] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" HandleID="k8s-pod-network.974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Workload="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.279 [INFO][5491] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.279 [INFO][5491] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.285 [WARNING][5491] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" HandleID="k8s-pod-network.974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Workload="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.285 [INFO][5491] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" HandleID="k8s-pod-network.974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Workload="localhost-k8s-whisker--cd5855f48--b5fpd-eth0" Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.286 [INFO][5491] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:50.292627 containerd[1465]: 2025-11-08 00:17:50.289 [INFO][5481] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06" Nov 8 00:17:50.292627 containerd[1465]: time="2025-11-08T00:17:50.292587115Z" level=info msg="TearDown network for sandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\" successfully" Nov 8 00:17:50.296835 containerd[1465]: time="2025-11-08T00:17:50.296805425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:17:50.296914 containerd[1465]: time="2025-11-08T00:17:50.296853035Z" level=info msg="RemovePodSandbox \"974ef15d4e168051f095b48a10e61901a8734d6e21716b218caff5f044f7ca06\" returns successfully" Nov 8 00:17:50.297477 containerd[1465]: time="2025-11-08T00:17:50.297436990Z" level=info msg="StopPodSandbox for \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\"" Nov 8 00:17:50.365633 containerd[1465]: time="2025-11-08T00:17:50.365566410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.337 [WARNING][5509] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0", GenerateName:"calico-kube-controllers-5f7468b9b5-", Namespace:"calico-system", SelfLink:"", UID:"0a4ccd10-3267-4054-9505-eb9db275a87f", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7468b9b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c", Pod:"calico-kube-controllers-5f7468b9b5-s94kb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c4eb3a2af9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.337 [INFO][5509] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.337 [INFO][5509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" iface="eth0" netns="" Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.337 [INFO][5509] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.337 [INFO][5509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.359 [INFO][5518] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" HandleID="k8s-pod-network.7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.359 [INFO][5518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.359 [INFO][5518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.365 [WARNING][5518] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" HandleID="k8s-pod-network.7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.365 [INFO][5518] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" HandleID="k8s-pod-network.7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.366 [INFO][5518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:50.374287 containerd[1465]: 2025-11-08 00:17:50.370 [INFO][5509] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:50.374827 containerd[1465]: time="2025-11-08T00:17:50.374325036Z" level=info msg="TearDown network for sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\" successfully" Nov 8 00:17:50.374827 containerd[1465]: time="2025-11-08T00:17:50.374353139Z" level=info msg="StopPodSandbox for \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\" returns successfully" Nov 8 00:17:50.374942 containerd[1465]: time="2025-11-08T00:17:50.374917097Z" level=info msg="RemovePodSandbox for \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\"" Nov 8 00:17:50.374973 containerd[1465]: time="2025-11-08T00:17:50.374950740Z" level=info msg="Forcibly stopping sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\"" Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.415 [WARNING][5536] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0", GenerateName:"calico-kube-controllers-5f7468b9b5-", Namespace:"calico-system", SelfLink:"", UID:"0a4ccd10-3267-4054-9505-eb9db275a87f", ResourceVersion:"1278", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 17, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f7468b9b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41823d0dfcee601f5d41b6bf35d77dd0b7d28097b30d13c814d73ce08c99ca7c", Pod:"calico-kube-controllers-5f7468b9b5-s94kb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c4eb3a2af9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.415 [INFO][5536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.415 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" iface="eth0" netns="" Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.415 [INFO][5536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.415 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.438 [INFO][5544] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" HandleID="k8s-pod-network.7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.438 [INFO][5544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.438 [INFO][5544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.445 [WARNING][5544] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" HandleID="k8s-pod-network.7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.445 [INFO][5544] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" HandleID="k8s-pod-network.7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Workload="localhost-k8s-calico--kube--controllers--5f7468b9b5--s94kb-eth0" Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.446 [INFO][5544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:50.452723 containerd[1465]: 2025-11-08 00:17:50.450 [INFO][5536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089" Nov 8 00:17:50.453594 containerd[1465]: time="2025-11-08T00:17:50.452788828Z" level=info msg="TearDown network for sandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\" successfully" Nov 8 00:17:50.467341 containerd[1465]: time="2025-11-08T00:17:50.467281378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:17:50.467475 containerd[1465]: time="2025-11-08T00:17:50.467355236Z" level=info msg="RemovePodSandbox \"7bac473ec888b45dc24b7c98ce16c45f85cc18ca393335ededcf1412acf2a089\" returns successfully" Nov 8 00:17:50.467950 containerd[1465]: time="2025-11-08T00:17:50.467909987Z" level=info msg="StopPodSandbox for \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\"" Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.507 [WARNING][5562] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kqptp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67db3da6-15c7-4668-a4a2-6c4899b5791f", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9", Pod:"coredns-674b8bbfcf-kqptp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd203c5bd52", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.507 [INFO][5562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.507 [INFO][5562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" iface="eth0" netns="" Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.507 [INFO][5562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.507 [INFO][5562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.529 [INFO][5571] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" HandleID="k8s-pod-network.251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.529 [INFO][5571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.530 [INFO][5571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.537 [WARNING][5571] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" HandleID="k8s-pod-network.251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.537 [INFO][5571] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" HandleID="k8s-pod-network.251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.539 [INFO][5571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:50.545317 containerd[1465]: 2025-11-08 00:17:50.542 [INFO][5562] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:50.545317 containerd[1465]: time="2025-11-08T00:17:50.545278604Z" level=info msg="TearDown network for sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\" successfully" Nov 8 00:17:50.545317 containerd[1465]: time="2025-11-08T00:17:50.545313039Z" level=info msg="StopPodSandbox for \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\" returns successfully" Nov 8 00:17:50.546223 containerd[1465]: time="2025-11-08T00:17:50.546185655Z" level=info msg="RemovePodSandbox for \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\"" Nov 8 00:17:50.546223 containerd[1465]: time="2025-11-08T00:17:50.546220070Z" level=info msg="Forcibly stopping sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\"" Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.580 [WARNING][5589] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--kqptp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"67db3da6-15c7-4668-a4a2-6c4899b5791f", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 16, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"443148293689b114bae18bf313ae9c07d0ea036500c7646795471b874e2e1cf9", Pod:"coredns-674b8bbfcf-kqptp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd203c5bd52", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.580 [INFO][5589] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.580 [INFO][5589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" iface="eth0" netns="" Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.580 [INFO][5589] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.580 [INFO][5589] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.602 [INFO][5598] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" HandleID="k8s-pod-network.251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.602 [INFO][5598] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.602 [INFO][5598] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.609 [WARNING][5598] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" HandleID="k8s-pod-network.251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.609 [INFO][5598] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" HandleID="k8s-pod-network.251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Workload="localhost-k8s-coredns--674b8bbfcf--kqptp-eth0" Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.611 [INFO][5598] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:17:50.617554 containerd[1465]: 2025-11-08 00:17:50.614 [INFO][5589] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8" Nov 8 00:17:50.618060 containerd[1465]: time="2025-11-08T00:17:50.617634602Z" level=info msg="TearDown network for sandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\" successfully" Nov 8 00:17:50.621726 containerd[1465]: time="2025-11-08T00:17:50.621700006Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:17:50.621788 containerd[1465]: time="2025-11-08T00:17:50.621744279Z" level=info msg="RemovePodSandbox \"251e1cf8a2295d7417e080868540ef33ab37ab931d716519be4d5d8f56cd39c8\" returns successfully" Nov 8 00:17:50.729565 containerd[1465]: time="2025-11-08T00:17:50.729525192Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:50.738579 containerd[1465]: time="2025-11-08T00:17:50.738536321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:17:50.738662 containerd[1465]: time="2025-11-08T00:17:50.738589732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:17:50.738792 kubelet[2522]: E1108 00:17:50.738747 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:17:50.739203 kubelet[2522]: E1108 00:17:50.738793 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:17:50.739203 kubelet[2522]: E1108 00:17:50.739016 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lh5vg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2fwvz_calico-system(7f4fec81-40b7-4cbb-9ed0-146aff61e0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:50.739334 containerd[1465]: time="2025-11-08T00:17:50.739052640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:17:50.740188 kubelet[2522]: E1108 00:17:50.740150 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-2fwvz" podUID="7f4fec81-40b7-4cbb-9ed0-146aff61e0a7" Nov 8 00:17:51.091762 containerd[1465]: time="2025-11-08T00:17:51.091697484Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:51.092939 containerd[1465]: time="2025-11-08T00:17:51.092891793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:17:51.093102 containerd[1465]: time="2025-11-08T00:17:51.092970792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:17:51.093253 kubelet[2522]: E1108 00:17:51.093208 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:17:51.093339 kubelet[2522]: E1108 00:17:51.093264 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:17:51.093461 kubelet[2522]: E1108 00:17:51.093412 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2vg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-vdtmc_calico-system(485c28c7-3ce9-4d8e-9396-e75393354e2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:51.095550 containerd[1465]: time="2025-11-08T00:17:51.095492881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:17:51.459875 containerd[1465]: time="2025-11-08T00:17:51.459808141Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:17:51.461126 containerd[1465]: time="2025-11-08T00:17:51.461067332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:17:51.461175 containerd[1465]: time="2025-11-08T00:17:51.461115312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:17:51.461421 kubelet[2522]: E1108 00:17:51.461359 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:17:51.461481 kubelet[2522]: E1108 00:17:51.461435 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:17:51.461731 kubelet[2522]: E1108 00:17:51.461648 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2vg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vdtmc_calico-system(485c28c7-3ce9-4d8e-9396-e75393354e2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:17:51.462916 kubelet[2522]: E1108 00:17:51.462860 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:17:53.530699 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:52738.service - OpenSSH per-connection server daemon (10.0.0.1:52738). Nov 8 00:17:53.572244 sshd[5608]: Accepted publickey for core from 10.0.0.1 port 52738 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:17:53.574528 sshd[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:53.579777 systemd-logind[1450]: New session 15 of user core. Nov 8 00:17:53.585749 systemd[1]: Started session-15.scope - Session 15 of User core. 
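The pull failures recorded above (goldmane, csi, node-driver-registrar) all have the same anatomy: containerd resolves the tag against ghcr.io, the registry answers 404 ("trying next host - response was http.StatusNotFound"), and the kubelet wraps the resulting NotFound into ErrImagePull. The resolution step can be reproduced against the same containerd socket; a sketch assuming containerd's Go client (github.com/containerd/containerd, 1.x layout):

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed images live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        _, err = client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.4", containerd.WithPullUnpack)
        switch {
        case errdefs.IsNotFound(err):
            // Same failure the log shows: "failed to resolve reference ...: not found".
            fmt.Println("registry has no such tag:", err)
        case err != nil:
            log.Fatal(err)
        }
    }

Because the error is NotFound rather than Unauthorized or a transport failure, retrying the identical reference cannot succeed until that tag is actually published to the registry.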
Nov 8 00:17:53.716130 sshd[5608]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:53.720384 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:52738.service: Deactivated successfully. Nov 8 00:17:53.723279 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:17:53.725644 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:17:53.727145 systemd-logind[1450]: Removed session 15. Nov 8 00:17:57.364510 kubelet[2522]: E1108 00:17:57.364430 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:17:58.729911 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:54944.service - OpenSSH per-connection server daemon (10.0.0.1:54944). Nov 8 00:17:58.770028 sshd[5632]: Accepted publickey for core from 10.0.0.1 port 54944 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:17:58.771892 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:17:58.776221 systemd-logind[1450]: New session 16 of user core. Nov 8 00:17:58.786723 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:17:58.896155 sshd[5632]: pam_unix(sshd:session): session closed for user core Nov 8 00:17:58.900930 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:54944.service: Deactivated successfully. Nov 8 00:17:58.903051 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:17:58.903697 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:17:58.904598 systemd-logind[1450]: Removed session 16. Nov 8 00:17:59.364783 kubelet[2522]: E1108 00:17:59.364701 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" podUID="89b15836-7628-4868-bd25-c2735fc5d488" Nov 8 00:18:01.366136 kubelet[2522]: E1108 00:18:01.365621 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b79d8578-fxtxr" podUID="53aaf5c5-a07c-4b5d-8085-2d2b30008c52" Nov 8 00:18:01.366755 kubelet[2522]: E1108 00:18:01.366328 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" podUID="031c3221-a127-47c9-883a-8bd9e65d7753" Nov 8 00:18:02.729006 kubelet[2522]: E1108 00:18:02.728938 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:18:03.366542 kubelet[2522]: E1108 00:18:03.366478 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" podUID="0a4ccd10-3267-4054-9505-eb9db275a87f" Nov 8 00:18:03.920804 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:54946.service - OpenSSH per-connection server daemon (10.0.0.1:54946). Nov 8 00:18:03.960859 sshd[5671]: Accepted publickey for core from 10.0.0.1 port 54946 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:18:03.962749 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:03.968341 systemd-logind[1450]: New session 17 of user core. Nov 8 00:18:03.977780 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:18:04.102945 sshd[5671]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:04.113049 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:54946.service: Deactivated successfully. Nov 8 00:18:04.115192 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:18:04.117061 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:18:04.125434 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:54954.service - OpenSSH per-connection server daemon (10.0.0.1:54954). Nov 8 00:18:04.126512 systemd-logind[1450]: Removed session 17. Nov 8 00:18:04.161952 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 54954 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:18:04.163798 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:04.168244 systemd-logind[1450]: New session 18 of user core. Nov 8 00:18:04.175151 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:18:04.518488 sshd[5687]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:04.529476 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:54954.service: Deactivated successfully. Nov 8 00:18:04.532170 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:18:04.534698 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:18:04.546032 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:54970.service - OpenSSH per-connection server daemon (10.0.0.1:54970). Nov 8 00:18:04.547323 systemd-logind[1450]: Removed session 18. 
Nov 8 00:18:04.586433 sshd[5699]: Accepted publickey for core from 10.0.0.1 port 54970 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:18:04.588434 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:04.593245 systemd-logind[1450]: New session 19 of user core. Nov 8 00:18:04.602732 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:18:05.123400 sshd[5699]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:05.138954 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:54970.service: Deactivated successfully. Nov 8 00:18:05.142006 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:18:05.144204 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:18:05.152895 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:54984.service - OpenSSH per-connection server daemon (10.0.0.1:54984). Nov 8 00:18:05.153998 systemd-logind[1450]: Removed session 19. Nov 8 00:18:05.191447 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 54984 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:18:05.193341 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:05.197591 systemd-logind[1450]: New session 20 of user core. Nov 8 00:18:05.203724 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:18:05.476501 sshd[5723]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:05.485314 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:54984.service: Deactivated successfully. Nov 8 00:18:05.487776 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:18:05.490204 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:18:05.500059 systemd[1]: Started sshd@20-10.0.0.21:22-10.0.0.1:54992.service - OpenSSH per-connection server daemon (10.0.0.1:54992). Nov 8 00:18:05.501072 systemd-logind[1450]: Removed session 20. Nov 8 00:18:05.538002 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 54992 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:18:05.539744 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:05.544462 systemd-logind[1450]: New session 21 of user core. Nov 8 00:18:05.552798 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:18:05.705608 sshd[5736]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:05.709030 systemd[1]: sshd@20-10.0.0.21:22-10.0.0.1:54992.service: Deactivated successfully. Nov 8 00:18:05.711564 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:18:05.713284 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:18:05.714969 systemd-logind[1450]: Removed session 21. 
Nov 8 00:18:06.367855 kubelet[2522]: E1108 00:18:06.367788 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f" Nov 8 00:18:06.368445 kubelet[2522]: E1108 00:18:06.367877 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2fwvz" podUID="7f4fec81-40b7-4cbb-9ed0-146aff61e0a7" Nov 8 00:18:10.717638 systemd[1]: Started sshd@21-10.0.0.21:22-10.0.0.1:45852.service - OpenSSH per-connection server daemon (10.0.0.1:45852). Nov 8 00:18:10.757833 sshd[5753]: Accepted publickey for core from 10.0.0.1 port 45852 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:18:10.759918 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:10.764924 systemd-logind[1450]: New session 22 of user core. Nov 8 00:18:10.772774 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:18:10.922373 sshd[5753]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:10.927443 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:18:10.929048 systemd[1]: sshd@21-10.0.0.21:22-10.0.0.1:45852.service: Deactivated successfully. Nov 8 00:18:10.934634 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:18:10.938074 systemd-logind[1450]: Removed session 22. 
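From here the log settles into a steady state: each failing image sits in the kubelet's per-image backoff, which is why fresh PullImage attempts (00:18:12 onward, below) arrive minutes apart rather than in a tight loop. A sketch of such a doubling-with-cap schedule; the 10s initial delay and 5m cap are the commonly cited kubelet defaults and should be treated as assumptions here:

    package main

    import (
        "fmt"
        "time"
    )

    // pullBackoff returns the delay before retry n of a failing image pull,
    // doubling from an initial delay up to a fixed ceiling.
    func pullBackoff(n int) time.Duration {
        const (
            initial  = 10 * time.Second // assumed kubelet default
            maxDelay = 5 * time.Minute  // assumed kubelet default
        )
        d := initial
        for i := 0; i < n; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 0; n < 6; n++ {
            fmt.Printf("retry %d after %v\n", n, pullBackoff(n))
        }
    }

While a pod is waiting out this delay the sync loop reports ImagePullBackOff ("Back-off pulling image ..."); once the delay expires it attempts a real pull again, fails with ErrImagePull, and re-enters backoff, exactly the alternation visible in the records that follow.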
Nov 8 00:18:12.365475 containerd[1465]: time="2025-11-08T00:18:12.365414406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:12.756475 containerd[1465]: time="2025-11-08T00:18:12.756409609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:12.757784 containerd[1465]: time="2025-11-08T00:18:12.757744254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:12.757978 containerd[1465]: time="2025-11-08T00:18:12.757834723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:12.758093 kubelet[2522]: E1108 00:18:12.758026 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:12.758535 kubelet[2522]: E1108 00:18:12.758111 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:12.758535 kubelet[2522]: E1108 00:18:12.758318 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dwgc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8577fdc947-456nr_calico-apiserver(89b15836-7628-4868-bd25-c2735fc5d488): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:12.759644 kubelet[2522]: E1108 00:18:12.759547 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-456nr" podUID="89b15836-7628-4868-bd25-c2735fc5d488" Nov 8 00:18:13.374148 containerd[1465]: time="2025-11-08T00:18:13.374085724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:18:13.721693 containerd[1465]: time="2025-11-08T00:18:13.721623873Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:13.723025 containerd[1465]: time="2025-11-08T00:18:13.722959120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:18:13.723132 containerd[1465]: time="2025-11-08T00:18:13.722999776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:18:13.723299 kubelet[2522]: E1108 00:18:13.723256 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:18:13.723395 kubelet[2522]: E1108 00:18:13.723315 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:18:13.723498 kubelet[2522]: E1108 00:18:13.723464 2522 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:074ed4dfb7104ec888d28786a94cff0e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gscnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b79d8578-fxtxr_calico-system(53aaf5c5-a07c-4b5d-8085-2d2b30008c52): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:13.725742 containerd[1465]: time="2025-11-08T00:18:13.725715404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:18:14.091870 containerd[1465]: time="2025-11-08T00:18:14.091706583Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:14.092841 containerd[1465]: time="2025-11-08T00:18:14.092806473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:18:14.092927 containerd[1465]: time="2025-11-08T00:18:14.092878607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:18:14.093133 kubelet[2522]: E1108 00:18:14.093087 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:18:14.093521 kubelet[2522]: E1108 00:18:14.093156 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:18:14.093521 kubelet[2522]: E1108 00:18:14.093309 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gscnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b79d8578-fxtxr_calico-system(53aaf5c5-a07c-4b5d-8085-2d2b30008c52): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:14.094802 kubelet[2522]: E1108 00:18:14.094749 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b79d8578-fxtxr" podUID="53aaf5c5-a07c-4b5d-8085-2d2b30008c52" Nov 8 00:18:15.936059 systemd[1]: Started sshd@22-10.0.0.21:22-10.0.0.1:45864.service - OpenSSH per-connection server daemon (10.0.0.1:45864). 
Nov 8 00:18:15.974865 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 45864 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc Nov 8 00:18:15.977300 sshd[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:18:15.983709 systemd-logind[1450]: New session 23 of user core. Nov 8 00:18:15.994819 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:18:16.151321 sshd[5778]: pam_unix(sshd:session): session closed for user core Nov 8 00:18:16.156138 systemd[1]: sshd@22-10.0.0.21:22-10.0.0.1:45864.service: Deactivated successfully. Nov 8 00:18:16.159183 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:18:16.160288 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:18:16.162237 systemd-logind[1450]: Removed session 23. Nov 8 00:18:16.365215 containerd[1465]: time="2025-11-08T00:18:16.365152576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:18:16.741257 containerd[1465]: time="2025-11-08T00:18:16.741201360Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:16.742322 containerd[1465]: time="2025-11-08T00:18:16.742266606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:18:16.742383 containerd[1465]: time="2025-11-08T00:18:16.742320125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:18:16.742548 kubelet[2522]: E1108 00:18:16.742498 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:18:16.742972 kubelet[2522]: E1108 00:18:16.742558 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:18:16.742972 kubelet[2522]: E1108 00:18:16.742889 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tz7nj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f7468b9b5-s94kb_calico-system(0a4ccd10-3267-4054-9505-eb9db275a87f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:16.743164 containerd[1465]: time="2025-11-08T00:18:16.742941063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:18:16.744677 kubelet[2522]: E1108 00:18:16.744622 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f7468b9b5-s94kb" 
podUID="0a4ccd10-3267-4054-9505-eb9db275a87f" Nov 8 00:18:17.104672 containerd[1465]: time="2025-11-08T00:18:17.104378207Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:17.105813 containerd[1465]: time="2025-11-08T00:18:17.105760103Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:18:17.105896 containerd[1465]: time="2025-11-08T00:18:17.105813221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:17.106074 kubelet[2522]: E1108 00:18:17.106035 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:17.106136 kubelet[2522]: E1108 00:18:17.106088 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:18:17.106274 kubelet[2522]: E1108 00:18:17.106235 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lw59n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8577fdc947-lzmrt_calico-apiserver(031c3221-a127-47c9-883a-8bd9e65d7753): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:18:17.107737 kubelet[2522]: E1108 00:18:17.107703 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8577fdc947-lzmrt" podUID="031c3221-a127-47c9-883a-8bd9e65d7753" Nov 8 00:18:17.367437 containerd[1465]: time="2025-11-08T00:18:17.366086793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:18:17.752299 containerd[1465]: time="2025-11-08T00:18:17.752243887Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:18:17.753651 containerd[1465]: time="2025-11-08T00:18:17.753604202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:18:17.753786 containerd[1465]: time="2025-11-08T00:18:17.753691484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:18:17.753886 kubelet[2522]: E1108 00:18:17.753828 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:17.753886 kubelet[2522]: E1108 00:18:17.753891 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:18:17.754428 kubelet[2522]: E1108 00:18:17.754090 2522 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lh5vg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2fwvz_calico-system(7f4fec81-40b7-4cbb-9ed0-146aff61e0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:18:17.755721 kubelet[2522]: E1108 00:18:17.755668 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2fwvz" podUID="7f4fec81-40b7-4cbb-9ed0-146aff61e0a7"
Nov 8 00:18:20.364067 kubelet[2522]: E1108 00:18:20.364010 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:18:21.178905 systemd[1]: Started sshd@23-10.0.0.21:22-10.0.0.1:59568.service - OpenSSH per-connection server daemon (10.0.0.1:59568).
Nov 8 00:18:21.217866 sshd[5793]: Accepted publickey for core from 10.0.0.1 port 59568 ssh2: RSA SHA256:EwQa33xXnp/Z8X6q+SbOa1gxW/miZLWDaHfAZaJSUdc
Nov 8 00:18:21.220020 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:18:21.224709 systemd-logind[1450]: New session 24 of user core.
Nov 8 00:18:21.234786 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 8 00:18:21.366374 sshd[5793]: pam_unix(sshd:session): session closed for user core
Nov 8 00:18:21.369915 containerd[1465]: time="2025-11-08T00:18:21.369510266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:18:21.377324 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit.
Nov 8 00:18:21.381253 systemd[1]: sshd@23-10.0.0.21:22-10.0.0.1:59568.service: Deactivated successfully.
Nov 8 00:18:21.385505 systemd[1]: session-24.scope: Deactivated successfully.
Nov 8 00:18:21.392212 systemd-logind[1450]: Removed session 24.
Nov 8 00:18:21.750138 containerd[1465]: time="2025-11-08T00:18:21.750080217Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:18:21.751434 containerd[1465]: time="2025-11-08T00:18:21.751361878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:18:21.751434 containerd[1465]: time="2025-11-08T00:18:21.751392163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:18:21.751703 kubelet[2522]: E1108 00:18:21.751634 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:18:21.752100 kubelet[2522]: E1108 00:18:21.751710 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:18:21.752100 kubelet[2522]: E1108 00:18:21.751914 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2vg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vdtmc_calico-system(485c28c7-3ce9-4d8e-9396-e75393354e2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:18:21.754056 containerd[1465]: time="2025-11-08T00:18:21.753946007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:18:22.075684 containerd[1465]: time="2025-11-08T00:18:22.075473709Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:18:22.077460 containerd[1465]: time="2025-11-08T00:18:22.077311136Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:18:22.077460 containerd[1465]: time="2025-11-08T00:18:22.077357222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:18:22.077700 kubelet[2522]: E1108 00:18:22.077629 2522 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:18:22.077756 kubelet[2522]: E1108 00:18:22.077705 2522 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:18:22.077937 kubelet[2522]: E1108 00:18:22.077867 2522 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2vg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vdtmc_calico-system(485c28c7-3ce9-4d8e-9396-e75393354e2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:18:22.079272 kubelet[2522]: E1108 00:18:22.079226 2522 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vdtmc" podUID="485c28c7-3ce9-4d8e-9396-e75393354e2f"